Guest interviews discussing the possibilities and potential of AI in Austria. Questions or suggestions? Write to [email protected]
The past years have been dominated by Large Language Models, and for most people, GenAI is still the epitome of progress.
Others are already thinking ahead and aiming at physical AI, which focuses on deploying AI successfully not only in the digital world but also in our physical one.
The crowning discipline in this field is humanoid robotics, and today I have the pleasure of welcoming two experts in this area as guests on the podcast.
Mathias Hazibar and Christian Tauber are the founders of NEOALP, an Austrian startup that supports companies in applying modern robotics across different domains.
We talk about humanoid robotics, which aims to develop autonomously acting robots that are able to learn human tasks with minimal effort and carry them out safely.
It is certainly still a long way until such robots replace a significant share of human labor, but I think this episode gives a good first insight into where we currently stand and where the journey might lead.
Mathias and Christian explain not only technical aspects such as the new Vision Language Action (VLA) models,
which form the heart of the new physical AI revolution, but also everything that is needed to get started with humanoid robotics in the first place.
We also cover why companies should perhaps already be thinking about collecting data today in order to benefit from the new robot generation in the near future, how best to do so, and where the biggest hurdles lie.
## References
AI agent platforms are on everyone's lips. And although the GenAI hype is slowly subsiding and many are bracing for an AI bubble, others still hope for the big breakthrough through so-called agent platforms: systems that promise to automate entire business units.
But as so often, many of these promises are hot air, and given the multitude of platforms and frameworks shooting up like mushrooms, it is no wonder that companies struggle to make the right decisions and to use AI sensibly.
I therefore believe that most companies will have to get by without truly working AI agent solutions for quite a while.
All the more exciting is today's conversation with my guest **Marko Goels**, managing director of **Digital Sunray**.
Marko and his team developed their own AI agent platform back in 2023: **SunrAI**. A platform that deliberately sets itself apart from classic RAG workflows and combines three crucial building blocks:
1. **A multi-agent system** in which specialized agents are flexibly combined into variable workflows.
2. **A smart data lake** that analyzes, enriches, and condenses raw data so that only the most relevant information is loaded into the LLM context for each query, yielding more precise answers and fewer hallucinations.
3. **A knowledge management layer** that captures a company's implicit knowledge via so-called routines and is continuously extended through expert feedback. This allows companies to train their agents step by step in their ways of working and specific requirements.
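To make the smart-data-lake idea more concrete, here is a minimal, hypothetical Python sketch: documents are enriched at ingestion time with a condensed summary and keywords, and at query time only the best-matching summaries are loaded into the LLM context. All names and the keyword-overlap scoring are illustrative assumptions, not SunrAI's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """A pre-processed document: raw text plus enrichment metadata
    (summary and keywords are computed once, at ingestion time)."""
    text: str
    summary: str                        # condensed version of the document
    keywords: set = field(default_factory=set)

def build_context(query: str, entries: list, budget: int = 2) -> str:
    """Rank enriched entries by keyword overlap with the query and return
    only the top `budget` condensed summaries as LLM context."""
    words = set(query.lower().split())
    ranked = sorted(entries, key=lambda e: len(words & e.keywords), reverse=True)
    return "\n".join(e.summary for e in ranked[:budget] if words & e.keywords)
```

The point of the design is that condensing happens ahead of time, so the context window carries distilled facts rather than raw documents.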
In the interview we talk about the **entrepreneurial perspective**, i.e. how an AI platform can be integrated into existing structures without unsettling employees, as well as **technical aspects**, for example why and, above all, how clean data preparation must be implemented to make complex applications possible in the first place.
This conversation is particularly interesting for anyone who wants to understand **what distinguishes a real agent platform from simple process automation tools like Zapier or n8n**, and how AI can already be put to good use in everyday business today.
Enjoy listening!
In this episode, I am joined by Professor Phanish Puranam from INSEAD, one of the world's leading thinkers on strategy and organizational design, and this year's recipient of the prestigious Oskar Morgenstern Medal from the University of Vienna.
Our conversation explores the deep and often hidden ways in which technology, and Artificial Intelligence in particular, reshapes organizations—how we work, how we collaborate, and how power is distributed inside organizations. In the first part of the interview, we dive into one of the most fundamental questions in organizational science: centralization versus decentralization. How do technologies, especially communication technologies, shift the balance between empowering workers with autonomy and giving managers unprecedented tools for monitoring and control?
In the second half of our discussion, we turn to generative AI and its impact on how employees' skills and expertise are developed within organizations. While GenAI promises efficiency and new forms of collaboration, it also carries the risk of “cognitive offloading”—outsourcing thinking to machines in ways that could erode human competence over time. We consider the tension between treating AI as a tool that enhances human capability, like the abacus, versus one that risks hollowing out expertise, like the calculator. And we confront the very important question of what organizations risk if they replace too many workers with AI agents, resulting in a future where every competitor uses the same AI. In such a world, what's left to distinguish one company from another?
Phanish makes a compelling case that companies must continue to invest in human-centric organisations—not only because people bring autonomy, competence, and connection, but also because these qualities will be the true sources of competitive advantage in an AI-saturated marketplace.
Since the beginning of the year, there has been a strong political desire in Europe to become independent from the USA. This concerns not only the current military dependence, but also the dependence on US tech companies. Particularly interesting for the AAIP is, of course, Europe's strong dependence on American and Chinese AI models and on the compute infrastructure needed to use these models.
Today on the podcast I am talking to Markus Tretzmüller, co-founder of Cortecs, an Austrian company that has set itself the goal of developing, based on a sky computing approach, a routing solution that enables European companies to use local cloud providers for AI applications. This makes it possible to build AI solutions that operate within the European legal space without having to give up the advantages of hyperscalers, such as cost efficiency and fault tolerance.
In the interview, Markus explains why relying on European subsidiaries of US companies is not enough to guarantee independence and data security, and what advantages a routing solution like Cortecs can offer.
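To illustrate the routing idea, here is a minimal, hypothetical sketch of a provider router: it picks the cheapest provider that operates in the required jurisdiction and currently reports itself healthy, giving both legal control and failover. The names and selection criteria are my assumptions, not Cortecs' actual logic.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    region: str          # legal jurisdiction the provider operates in
    cost_per_1k: float   # price per 1k tokens
    healthy: bool        # result of a recent health check

def route(providers, region="EU"):
    """Return the cheapest healthy provider in the required jurisdiction,
    or None if no provider qualifies (caller may then queue or fail over)."""
    eligible = [p for p in providers if p.region == region and p.healthy]
    return min(eligible, key=lambda p: p.cost_per_1k, default=None)
```

A real router would additionally weigh latency and model quality, but the core trade-off (jurisdiction constraint first, then cost, with unhealthy providers excluded) stays the same.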
Enjoy listening!
## References
- Cortecs: https://cortecs.ai/ - Building Your Sovereign AI Future
- Sky computing: https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s02-stoica.pdf
- RouteLLM: https://arxiv.org/abs/2406.18665
- FrugalGPT: https://arxiv.org/abs/2305.05176
## Summary
Large Language Models have many strengths, and the frontier of what is possible and what they can be used for is pushed back on a daily basis. One area in which current LLMs need to improve is how they communicate with children. Today's guests, Mathias Neumayer and Dima Rubanov, are here to do exactly that with their newest product Lora, a child-friendly AI.
Through their existing product Oscar Stories, they identified issues with age-appropriate language and gender bias in current LLMs. With Lora, they are building their own child-friendly AI solution by fine-tuning state-of-the-art LLMs with expert-curated data, ensuring that Lora generates language appropriate for children of a specific age.
On the show they describe how they are building Lora and what they plan to do with it.
### References
- https://oscarstories.com/
- GenBit Score: https://www.microsoft.com/en-us/research/wp-content/uploads/2021/10/MSJAR_Genbit_Final_Version-616fd3a073758.pdf
- Counterfactual Reasoning for Bias Evaluation: https://arxiv.org/abs/2302.08204
Today on the show I have the pleasure of talking to returning guest Taylor Peer, one of the co-founders of the startup behind Beat Shaper.
Taylor will explain how they follow a bottom-up approach to creating electronic music, giving producers fine-grained control to create individual music instruments and beat patterns. For this, Beat Shaper combines Variational Autoencoders and Transformers: the VAE creates high-dimensional embeddings that represent the user's preferences, and these embeddings guide the autoregressive generation process of the Transformer. The token sequence generated by the Transformer is a custom-developed symbolic music notation that can be decoded into individual instruments.
We discuss the system architecture and training process in detail. Taylor explains in depth how they built such a system, and how they created their own synthetic training dataset, containing music in symbolic notation, which enables the fine-grained control over the generated music.
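As a toy illustration of the conditioning pattern described above — a latent "preference" vector steering an autoregressive token generator whose output is a symbolic notation — here is a plain-Python sketch. The vocabulary, the hand-made feature vectors, and the greedy scoring are illustrative assumptions standing in for Beat Shaper's actual VAE-plus-Transformer model.

```python
# Toy symbolic vocabulary: each token names one instrument hit per step.
VOCAB = ["kick", "snare", "hat", "rest"]

# One hand-made feature vector per token (an assumption for illustration;
# a real system learns these representations).
FEATURES = {"kick": (1, 0), "snare": (0, 1), "hat": (0.5, 0.5), "rest": (0, 0)}

def generate(latent, steps=8):
    """Autoregressively pick, at each step, the token whose feature vector
    best matches the latent preference vector. A real system would run a
    Transformer conditioned on a VAE embedding instead of this dot product."""
    seq = []
    for _ in range(steps):
        score = lambda t: sum(l * f for l, f in zip(latent, FEATURES[t]))
        seq.append(max(VOCAB, key=score))
    return seq

def decode(seq):
    """Decode the symbolic token sequence into per-instrument step patterns."""
    return {inst: [t == inst for t in seq] for inst in VOCAB if inst != "rest"}
```

The key idea survives the simplification: the latent vector is fixed for the whole generation, so every autoregressive step is biased toward the user's preferences, and the symbolic output can be deterministically decoded into individual instrument tracks.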
I hope you like this episode, and find it useful.
### References
beatshaper.ai - Beat Shaper, a copilot for music producers
https://openai.com/index/musenet/ - OpenAI MuseNet
My guest in this episode is the computational social scientist Daniel Kondor, a postdoc at the Complexity Science Hub in Vienna.
Daniel talks about research methods that make it possible to study the impact of various factors, such as technological development, on societies, and in particular on their rise or fall over long periods of time. He explains how modern tools from computational social science, like agent-based modelling, can be used to study past and future social groups. We talk about his most recent publication, which takes a complex-systems perspective on the risks AI poses for society and provides suggestions on how to manage such risks through public discourse and the involvement of affected competency groups.
## References
- Waring TM, Wood ZT, Szathmáry E. 2023 Characteristic processes of human evolution caused the Anthropocene and may obstruct its global solutions. Phil. Trans. R. Soc. B 379: 20220259. https://doi.org/10.1098/rstb.2022.0259
- Kondor D, Hafez V, Shankar S, Wazir R, Karimi F. 2024 Complex systems perspective in assessing risks in artificial intelligence. Phil. Trans. R. Soc. A 382: 20240109. https://doi.org/10.1098/rsta.2024.0109
- https://seshat-db.com/
With the last episode of 2024, I dare to release a solo episode summarizing my Christmas research on the topics of
- Small Language Models
- Agentic Systems
- Advanced Reasoning / Test time compute paradigm
I hope you find it interesting and useful!
All the best for 2025!
## AAIP Community
Join our Discord server and ask guests directly or discuss related topics with the community.
https://discord.gg/5Pj446VKNU
## TOC
00:00:05 Intro
00:01:52 Part 1 - Small Language Models
00:20:16 Part 2 - Agentic Systems
00:36:16 Part 3 - Advanced Reasoning
00:58:08 Outro
## References
- Testing Qwen2.5 - https://huggingface.co/spaces/Qwen/Qwen2.5
- Qwen2.5 Technical report - https://arxiv.org/pdf/2412.15115
- Agents: https://www.superannotate.com/blog/llm-agents
- Scaling Test-time compute: https://arxiv.org/html/2408.03314v1
- Test time compute: https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute
- O3 achieving 88% on ARC-AGI https://arcprize.org/blog/oai-o3-pub-breakthrough
- Human performance on ARC-AGI (76%): https://arxiv.org/html/2409.01374v1
## Summary
Today we have as a guest Alexander Zehetmaier, co-founder of SunRise AI Solutions.
Alex will explain how SunRise AI is partnering with companies to navigate this challenging space, by providing their guidance, knowledge and network of experts to help companies apply AI successfully.
Alex will talk in detail about one of their partners, Mein Dienstplan, which is developing a Graph Neural Network based solution that generates complex work timetables. Scheduling a timetable for a large number of employees and shifts is not an easy task, especially if one has to satisfy hard constraints like labor laws and soft constraints like employee preferences.
Alex will explain in detail how they developed a hybrid solution that uses a Graph Neural Network to create candidate schedules, which are then validated and improved through heuristic-based methods.
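The generate-then-validate pattern behind such a hybrid scheduler can be sketched in a few lines. In this toy version, exhaustive enumeration stands in for the GNN's candidate generation, a hard-constraint filter rejects illegal schedules, and a soft-constraint score picks the best survivor; all names and constraints here are illustrative assumptions, not Mein Dienstplan's actual system.

```python
import itertools

EMPLOYEES = ["ana", "ben", "eva"]
SHIFTS = ["mon", "tue"]
MAX_SHIFTS = 1                                        # hard constraint (labor-law style)
PREFERS = {"ana": "mon", "ben": "tue", "eva": "mon"}  # soft constraints (preferences)

def valid(schedule):
    """Hard-constraint check: no employee works more than MAX_SHIFTS."""
    return all(list(schedule.values()).count(e) <= MAX_SHIFTS for e in EMPLOYEES)

def score(schedule):
    """Soft-constraint score: number of satisfied shift preferences."""
    return sum(PREFERS[e] == s for s, e in schedule.items())

def best_schedule():
    """Enumerate candidate assignments (a stand-in for GNN-generated
    candidates), filter out hard-constraint violations, then pick the
    candidate with the best soft-constraint score."""
    candidates = [dict(zip(SHIFTS, combo))
                  for combo in itertools.product(EMPLOYEES, repeat=len(SHIFTS))]
    feasible = [c for c in candidates if valid(c)]
    return max(feasible, key=score)
```

The split mirrors the episode's point: a learned model proposes plausible schedules cheaply, while deterministic checks guarantee that labor-law constraints are never violated.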
## AAIP Community
Join our Discord server and ask guests directly or discuss related topics with the community.
https://discord.gg/5Pj446VKNU
## TOC
00:00:00 Beginning
00:02:23 Guest Introduction
00:04:19 SunRise AI Solutions
00:07:45 Mein Dienstplan
00:19:52 Building timetables with genAI
00:39:36 How SunRise AI can help startups
## References
Alexander Zehetmaier: https://www.linkedin.com/in/alexanderzehetmaier/
SunRise AI Solutions: https://www.sunriseai.solutions/
MeinDienstplan: https://www.meindienstplan.at/
As you surely know, OpenAI is not very open about how their systems work or how they build them. More importantly for most users and businesses, OpenAI is agnostic about how users apply their services and how to make the most of the models' multi-step "reasoning" capabilities.
In stark contrast to OpenAI, today I am talking to Marius Dinu, the CEO and co-founder of the Austrian startup extensity.ai. Extensity.ai follows an open-core model: they build an open-source framework that serves as the foundation for AI agent systems performing multi-step reasoning and problem solving, while generating revenue by providing enterprise support and custom implementations.
Marius will explain how their Neuro-Symbolic AI framework combines the strengths of symbolic reasoning, such as problem decomposition, explainability, correctness, and efficiency, with an LLM's understanding of natural language and its capability to operate on unstructured text following instructions.
We will discuss how their framework can be used to build complex multi-step reasoning workflows, and how it acts as an orchestrator and reasoning engine that applies LLMs as semantic parsers which, at different decision points, decide what tools or sub-systems to use next. We also cover how their research focuses on measuring the quality and correctness of individual workflow steps, in order to optimize workflows end-to-end and build a reliable, explainable, and efficient problem-solving system.
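A minimal sketch of one decision point in such an orchestrator, with a trivial keyword classifier standing in for the LLM semantic parser. The tool names and the routing rule are illustrative assumptions, not the SymbolicAI API.

```python
def parse_intent(query):
    """Stand-in for an LLM semantic parser: map a natural-language query
    to the name of the tool that should handle it."""
    if any(ch.isdigit() for ch in query):
        return "calculator"
    return "search"

# Registry of sub-systems the orchestrator can dispatch to.
TOOLS = {
    # Toy arithmetic only; never eval untrusted input in real code.
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),
    "search": lambda q: f"search results for: {q}",
}

def orchestrate(query):
    """One decision point of a multi-step workflow: the parser chooses a
    tool, and the orchestrator executes it on the query."""
    return TOOLS[parse_intent(query)](query)
```

A full workflow chains many such decision points, feeding each tool's output back into the next parsing step; measuring correctness per step, as discussed in the episode, is what makes the chain optimizable end-to-end.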
I hope you find this episode useful and interesting.
## AAIP Community
Join our Discord server and ask guests directly or discuss related topics with the community.
https://discord.gg/5Pj446VKNU
## TOC
00:00:00 Beginning
00:03:31 Guest Introduction
00:08:32 Extensity.ai
00:17:38 Building a multi-step reasoning framework
00:26:05 Generic Problem Solver
00:48:41 How to ensure the quality of results?
01:04:47 Comparison with OpenAI Strawberry
### References
Marius Dinu - https://www.linkedin.com/in/mariusconstantindinu/
Extensity.ai - https://www.extensity.ai/
Extensity.ai YT - https://www.youtube.com/@extensityAI
SymbolicAI Paper: https://arxiv.org/abs/2402.00854
Today on the podcast I have the pleasure of talking to Jules Salzinger, Computer Vision Researcher at the Vision & Automation Center of the AIT, the Austrian Institute of Technology.
Jules will share his newest research on applying computer vision systems that analyze drone videos to perform remote plant phenotyping. This makes it possible to analyze plant growth, as well as how certain plant diseases spread within a field.
We will discuss how the diversity in biology and agriculture makes it challenging to build AI systems that generalize across plants, locations, and time.
Jules will explain how, in their latest research, they focus on experiments that provide insights into how to build effective AI systems for agriculture and how to apply them, all with the goal of building scalable AI systems and making their application not only possible but efficient and useful.
## TOC
00:00:00 Beginning
00:03:02 Guest Introduction
00:15:04 Supporting Agriculture with AI
00:22:56 Scalable Plant Phenotyping
00:37:33 Paper: TriNet
01:10:10 Major findings
### References
- Jules Salzinger: https://www.linkedin.com/in/jules-salzinger/
- VAC: https://www.ait.ac.at/en/about-the-ait/center/center-for-vision-automation-control
- AI in Agriculture: https://intellias.com/artificial-intelligence-in-agriculture/
- TriNet: Exploring More Affordable and Generalisable Remote Phenotyping with Explainable Deep Models: https://www.mdpi.com/2504-446X/8/8/407