68. Markus Tretzmüller - Cortecs - European LLM Infrastructure Independence
Since the beginning of the year there has been a strong political desire in Europe to become independent from the USA. This concerns not only the current military dependency, but also the dependency on US tech companies. Particularly interesting for the AAIP is, of course, Europe's strong dependence on American and Chinese AI models and on the compute infrastructure needed to use these models.

Today on the podcast I speak with Markus Tretzmüller, co-founder of Cortecs, an Austrian company whose goal is to use a Sky Computing approach to build a routing solution that enables European companies to use local cloud providers for AI applications. This makes it possible to develop AI solutions that operate within the European legal space without having to give up the advantages of hyperscalers, such as cost efficiency and fault tolerance.

In the interview Markus explains why relying on European subsidiaries of US companies is not enough to guarantee independence and data security, and what advantages a routing solution like Cortecs can offer.

Enjoy, and happy listening!

## References
- Cortecs: https://cortecs.ai/ - Building Your Sovereign AI Future
- Sky Computing: https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s02-stoica.pdf
- RouteLLM: https://arxiv.org/abs/2406.18665
- FrugalGPT: https://arxiv.org/abs/2305.05176
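To make the routing idea above more concrete, here is a minimal Python sketch of a client that tries several EU-hosted, OpenAI-compatible inference endpoints in order of preference and falls back when one is unavailable. The provider names, URLs, model names and environment variables are illustrative assumptions, not Cortecs' actual API.

```python
# Hedged sketch: route a chat request to the first available EU provider.
# All endpoints, model names and env var names below are hypothetical.
import os
import requests

EU_PROVIDERS = [
    {"name": "provider-a", "url": "https://llm.eu-provider-a.example/v1/chat/completions",
     "model": "mistral-small", "key_env": "PROVIDER_A_KEY"},
    {"name": "provider-b", "url": "https://llm.eu-provider-b.example/v1/chat/completions",
     "model": "llama-3.1-70b", "key_env": "PROVIDER_B_KEY"},
]

def route_chat(messages, timeout=30):
    """Return (provider_name, answer) from the first EU provider that responds."""
    for provider in EU_PROVIDERS:
        try:
            resp = requests.post(
                provider["url"],
                headers={"Authorization": f"Bearer {os.environ[provider['key_env']]}"},
                json={"model": provider["model"], "messages": messages},
                timeout=timeout,
            )
            resp.raise_for_status()
            return provider["name"], resp.json()["choices"][0]["message"]["content"]
        except (requests.RequestException, KeyError):
            continue  # provider unavailable or misconfigured, try the next one
    raise RuntimeError("No EU provider available")

if __name__ == "__main__":
    name, answer = route_chat([{"role": "user", "content": "Hello from Vienna!"}])
    print(f"answered by {name}: {answer}")
```

A production router would of course also weigh latency, price and data-residency constraints per request, which is where the Sky Computing idea goes beyond simple failover.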
--------
42:39
67. Mathias Neumayer and Dima Rubanov - Lora, a child-friendly AI
## Summary
Large Language Models have many strengths, and the frontier of what is possible and what they can be used for is pushed back on a daily basis. One area in which current LLMs need to improve is how they communicate with children. Today's guests, Mathias Neumayer and Dima Rubanov, are here to do exactly that with their newest product, Lora - a child-friendly AI.

Through their existing product Oscar Stories, they identified issues with age-appropriate language and gender bias in current LLMs. With Lora, they are building their own child-friendly AI by fine-tuning state-of-the-art LLMs with expert-curated data that ensures Lora generates language appropriate for children of a specific age.

On the show they describe how they are building Lora and what they plan to do with it.

### References
- Oscar Stories: https://oscarstories.com/
- GenBit Score: https://www.microsoft.com/en-us/research/wp-content/uploads/2021/10/MSJAR_Genbit_Final_Version-616fd3a073758.pdf
- Counterfactual Reasoning for Bias Evaluation: https://arxiv.org/abs/2302.08204
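As a rough illustration of the counterfactual style of bias evaluation referenced above, the sketch below generates a story twice, once from the original prompt and once with gendered words swapped, and compares simple word statistics between the two outputs. The `generate` callable, the word pairs and the watch-list are placeholders, not the evaluation pipeline actually used for Lora.

```python
# Hedged sketch of a counterfactual bias probe: compare model output for a prompt
# and its gender-swapped counterpart. Word lists and `generate` are placeholders.
import re
from collections import Counter

COUNTERFACTUAL_PAIRS = {"he": "she", "him": "her", "his": "her", "boy": "girl"}

def swap_gender(prompt: str) -> str:
    """Build the counterfactual prompt by swapping gendered words."""
    pattern = r"\b(" + "|".join(COUNTERFACTUAL_PAIRS) + r")\b"
    return re.sub(pattern, lambda m: COUNTERFACTUAL_PAIRS[m.group(0).lower()], prompt,
                  flags=re.IGNORECASE)

def word_profile(text: str, watch_list: set) -> Counter:
    """Count how often words from a watch-list (e.g. trait adjectives) appear."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in watch_list)

def bias_report(generate, prompt: str, watch_list: set) -> dict:
    """Compare watch-list word counts between original and counterfactual output."""
    return {
        "original": word_profile(generate(prompt), watch_list),
        "counterfactual": word_profile(generate(swap_gender(prompt)), watch_list),
    }

if __name__ == "__main__":
    def dummy_generate(prompt):          # stand-in for the real LLM call
        return f"Once upon a time there was {prompt}, who was brave and strong."
    print(bias_report(dummy_generate, "a boy who loved robots", {"brave", "strong", "gentle"}))
```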
--------
53:10
66. Taylor Peer - Beat Shaper - A music producer's AI Copilot
Today on the show I have the pleasure of talking to returning guest Taylor Peer, one of the co-founders of the startup behind Beat Shaper.

Taylor explains how they follow a bottom-up approach to creating electronic music, giving producers fine-grained control to create individual instruments and beat patterns. For this, Beat Shaper combines Variational Autoencoders (VAEs) and Transformers. The VAE is used to create high-dimensional embeddings that represent the user's preferences and guide the autoregressive generation process of the Transformer. The token sequence generated by the Transformer is a custom-developed symbolic music notation that can be decoded into individual instruments. We discuss the system architecture and training process in detail. Taylor explains in depth how they built such a system and how they created their own synthetic training dataset containing music in symbolic notation, which enables the fine-grained control over the generated music.

I hope you like this episode and find it useful.

### References
- beatshaper.ai - Beat Shaper, an AI copilot for music producers
- https://openai.com/index/musenet/ - OpenAI MuseNet
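For a more concrete picture of the architecture described above, here is a hedged PyTorch sketch of an autoregressive Transformer conditioned on a VAE latent that stands in for the user's preferences. All dimensions, the vocabulary size and the conditioning scheme are illustrative assumptions, not Beat Shaper's actual model.

```python
# Hedged sketch: a VAE encoder maps a reference pattern to a latent "preference"
# vector, which is prepended as a conditioning token to an autoregressive
# Transformer over a symbolic music vocabulary. Sizes are illustrative only.
import torch
import torch.nn as nn

class LatentConditionedMusicModel(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, latent_dim=64, n_layers=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # VAE encoder: reference token sequence -> mean / log-variance of the latent
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.latent_to_token = nn.Linear(latent_dim, d_model)
        # Autoregressive decoder over the symbolic music vocabulary
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def encode(self, reference_tokens):
        _, h = self.encoder(self.token_emb(reference_tokens))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return z, mu, logvar

    def forward(self, reference_tokens, target_tokens):
        z, mu, logvar = self.encode(reference_tokens)
        cond = self.latent_to_token(z).unsqueeze(1)               # conditioning "token"
        x = torch.cat([cond, self.token_emb(target_tokens)], dim=1)
        seq_len = x.size(1)
        causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        h = self.decoder(x, mask=causal_mask)
        return self.head(h[:, :-1]), mu, logvar                   # logits[i] predicts target_tokens[i]

if __name__ == "__main__":
    model = LatentConditionedMusicModel()
    ref = torch.randint(0, 512, (2, 32))      # batch of reference patterns
    tgt = torch.randint(0, 512, (2, 64))      # batch of target token sequences
    logits, mu, logvar = model(ref, tgt)
    print(logits.shape)                       # torch.Size([2, 64, 512])
```

At training time the cross-entropy over these logits would be combined with the usual KL term on `mu` and `logvar`; at generation time tokens would be sampled autoregressively while the conditioning token stays fixed.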
--------
52:20
65. Daniel Kondor - CSH - The long-term impact of AI on society
Guest in this episode is the computational social scientist Daniel Kondor, a postdoc at the Complexity Science Hub (CSH) in Vienna.

Daniel talks about research methods that make it possible to study the impact of various factors, such as technological development, on societies, and in particular on their rise or fall over long periods of time. He explains how modern tools from computational social science, like agent-based modelling, can be used to study past and future social groups. We talk about his most recent publication, which takes a complex systems perspective on the risks AI poses for society and provides suggestions on how to manage such risks through public discourse and the involvement of affected competency groups.

## References
- Waring TM, Wood ZT, Szathmáry E. 2023. Characteristic processes of human evolution caused the Anthropocene and may obstruct its global solutions. Phil. Trans. R. Soc. B 379: 20220259. https://doi.org/10.1098/rstb.2022.0259
- Kondor D, Hafez V, Shankar S, Wazir R, Karimi F. 2024. Complex systems perspective in assessing risks in artificial intelligence. Phil. Trans. R. Soc. A 382: 20240109. https://doi.org/10.1098/rsta.2024.0109
- https://seshat-db.com/
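As a tiny, generic illustration of what agent-based modelling looks like in code, the sketch below simulates how a technology spreads through a population of agents who adopt it with a probability that grows with the share of adopting neighbours. This is a textbook-style toy, not one of the models discussed in the episode or used at the Complexity Science Hub.

```python
# Toy agent-based model: technology diffusion on a ring of agents.
import random

def simulate_diffusion(n_agents=200, steps=50, base_rate=0.01, peer_weight=0.5, seed=42):
    rng = random.Random(seed)
    adopted = [False] * n_agents
    adopted[0] = True                                   # a single initial adopter
    history = []
    for _ in range(steps):
        new_state = adopted[:]
        for i in range(n_agents):
            if adopted[i]:
                continue
            neighbours = [adopted[(i - 1) % n_agents], adopted[(i + 1) % n_agents]]
            p = base_rate + peer_weight * (sum(neighbours) / len(neighbours))
            if rng.random() < p:
                new_state[i] = True
        adopted = new_state
        history.append(sum(adopted) / n_agents)
    return history                                      # adoption share per step (S-shaped curve)

if __name__ == "__main__":
    for step, share in enumerate(simulate_diffusion()):
        print(f"step {step:02d}: {share:.0%} adopted")
```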
--------
1:04:50
64. Solo - Manuel Pasieka on the hottest LLM topics of 2024
With the last episode in 2024, I dare to release a solo episode, summarizing my Christmas research on the topics of
- Small Language Models
- Agentic Systems
- Advanced Reasoning / Test-time compute paradigm (a minimal best-of-N sketch follows the references below)
I hope you find it interesting and useful!
All the best for 2025!
## AAIP Community
Join our Discord server to ask guests directly or discuss related topics with the community.
https://discord.gg/5Pj446VKNU
## TOC
00:00:05 Intro
00:01:52 Part 1 - Small Language Models
00:20:16 Part 2 - Agentic Systems
00:36:16 Part 3 - Advanced Reasoning
00:58:08 Outro
## References
- Testing Qwen2.5 - https://huggingface.co/spaces/Qwen/Qwen2.5
- Qwen2.5 Technical report - https://arxiv.org/pdf/2412.15115
- Agents: https://www.superannotate.com/blog/llm-agents
- Scaling Test-time compute: https://arxiv.org/html/2408.03314v1
- Test time compute: https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute
- O3 achieving 88% on ARC-AGI: https://arcprize.org/blog/oai-o3-pub-breakthrough
- Human performance on ARC-AGI (76%): https://arxiv.org/html/2409.01374v1
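To make the test-time compute paradigm from Part 3 a bit more tangible, here is a minimal best-of-N sketch: sample several candidate answers and keep the one a verifier scores highest. `sample_answer` and `verifier_score` are placeholders for an LLM call and a verifier or reward model; this is not code from the references above.

```python
# Hedged best-of-N sketch of test-time compute: spend more inference compute by
# sampling n answers and reranking them with a verifier. Both callables are
# placeholders for a real LLM and a real verifier/reward model.
from typing import Callable

def best_of_n(question: str,
              sample_answer: Callable[[str], str],
              verifier_score: Callable[[str, str], float],
              n: int = 8) -> str:
    candidates = [sample_answer(question) for _ in range(n)]
    return max(candidates, key=lambda answer: verifier_score(question, answer))
```

More elaborate variants discussed in the linked references replace this simple reranking with search over intermediate reasoning steps guided by process reward models.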