Talking Papers Podcast

Itzik Ben-Shabat

Available episodes

5 of 35
  • 3D Paintbrush - Dale Decatur
    🎙️ Welcome to the latest episode of the Talking Papers Podcast! In this episode, I had the pleasure of hosting Dale Decatur, a talented third-year PhD student in the University of Chicago's 3DL lab, where he studies computer graphics, 3D computer vision, and deep learning.

    📄 We delved into Dale's paper "3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score Distillation," published at CVPR 2024. The paper introduces 3D Paintbrush, a technique that automatically textures local semantic regions on meshes from text descriptions. By producing texture maps that integrate seamlessly into standard graphics pipelines, Dale's method not only streamlines the texturing process but also improves the quality of both localization and stylization.

    🌟 The Cascaded Score Distillation (CSD) technique developed in the paper leverages multiple stages of a cascaded diffusion model to supervise local editing with generative priors learned from images at varying resolutions. This gives users control over both the granularity and the global understanding of the edit, opening up new possibilities for simplifying the editing of 3D assets.

    💡 My insights: this paper marks a significant step toward democratizing 3D asset editing through text prompts, a trend gaining traction in the research community. The meticulous approach taken by Dale and his collaborators sets a new standard for local editing and paves the way for more accessible content creation in 3D.

    🔍 Dale's journey to 3D Paintbrush is an inspiring one. Our paths first crossed at CVPR 2023, when he presented his 3D Highlighter paper. Although I didn't feature that work on the podcast at the time, our mutual acquaintance Itai Lang reintroduced us at CVPR 2024, and Dale's progress with 3D Paintbrush made it clear that he had to join us on the show. I'm thrilled to share his insights with our audience.

    🔗 Don't miss this discussion about the future of 3D asset editing! Subscribe to the Talking Papers Podcast for more conversations with emerging academics and PhD students, and let me know your thoughts in the comments below. Thanks for tuning in, and stay tuned for more research discussions! 🚀

    All links and resources are available in the blogpost: https://www.itzikbs.com/3dpaintbrush
    🎧 Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
    📧 Subscribe to our mailing list: http://eepurl.com/hRznqb
    🐦 Follow us on Twitter: https://twitter.com/talking_papers
    🎥 YouTube Channel: https://bit.ly/3eQOgwP
    --------  
    57:42
  • 3DInAction - Yizhak Ben-Shabat
    🎙️ Unveiling 3DInAction with Yizhak Ben-Shabat
    📚 Title: 3DInAction: Understanding Human Actions in 3D Point Clouds
    📅 Published in: CVPR 2024 (highlight)
    👤 Guest: Yizhak (Itzik) Ben-Shabat

    Welcome back to another episode of the Talking Papers Podcast, where we bring you the latest breakthroughs in academic research directly from early-career academics and PhD students! This week we host Itzik Ben-Shabat to discuss his paper "3DInAction: Understanding Human Actions in 3D Point Clouds," published at CVPR 2024 as a highlight.

    In this episode, we delve into a novel method for 3D point cloud action recognition. Itzik explains how the pipeline addresses the major difficulties of point cloud data, such as lack of structure, permutation invariance, and a varying number of points. With patches moving in time (t-patches) and a hierarchical architecture, 3DInAction significantly enhances spatio-temporal representation learning, achieving superior performance on datasets like DFAUST and IKEA ASM.

    Main contributions:
    1. The 3DInAction pipeline for 3D point cloud action recognition.
    2. A detailed explanation of t-patches as a key building block.
    3. A hierarchical architecture for improved spatio-temporal representations.
    4. Demonstrated performance gains on existing benchmarks.

    Host insights: given my involvement in the project, I can share that when I embarked on this journey, only a handful of studies tackled 3D action recognition from point cloud data. Today this has grown into an active and evolving field of research, showing just how timely this work is.

    Behind the scenes: the title "3DInAction" marks the culmination of three years of research coinciding with my fellowship's theme. This episode is unique in that it is hosted by an AI avatar created with Synthesia; Itzik was looking for an exciting way to share this story using the latest technology. There is no sponsorship involved; the avatar simply adds an innovative twist to the discussion. Note that the host of this episode is not a real person: she is an AI-generated avatar, and everything she says was fully scripted.

    Don't miss this conversation with Itzik Ben-Shabat. Leave your thoughts and questions in the comments below, and hit that subscribe button to stay updated with our latest episodes. Ready to be part of this journey? Click play and let's dive into the world of 3D action recognition! 🚀

    All links and resources are available in the blogpost: https://www.itzikbs.com/3dinaction
    --------  
    30:55
  • Cameras as Rays - Jason Y. Zhang
    Welcome to the latest episode of the Talking Papers Podcast! This week's guest is Jason Zhang, a PhD student at the Robotics Institute at Carnegie Mellon University, who joined us to discuss his paper "Cameras as Rays: Pose Estimation via Ray Diffusion," published at ICLR 2024.

    Jason's research homes in on the pivotal task of estimating camera poses for 3D reconstruction, a challenge made harder by sparse views. His paper proposes an inventive, out-of-the-box representation that treats a camera pose as a bundle of rays. This perspective pays off, demonstrating promising results even in the sparse-view setting. What's particularly exciting is that his approach, in both its regression-based and diffusion-based variants, achieves top performance on camera pose estimation on CO3D and generalizes effectively to unseen object categories as well as in-the-wild captures. Throughout our conversation, Jason explained how the denoising diffusion model and set-level transformers come into play to yield these results. I found his technique a breath of fresh air in the field of camera pose estimation, notably in its formulation of both the regression and diffusion models.

    On a more personal note, Jason and I didn't know each other before this podcast, so it was fantastic learning about his journey from the Bay Area to Pittsburgh. His experiences truly enriched our discussion and made this one of our most memorable episodes yet. We hope you find it as enlightening as we did creating it. If you enjoyed our chat, don't forget to subscribe for more thought-provoking discussions with early-career academics and PhD students, and leave a comment below sharing your thoughts on Jason's paper! Until next time, keep following your curiosity and questioning the status quo.

    #TalkingPapersPodcast #ICLR2024 #CameraPoseEstimation #3DReconstruction #RayDiffusion #PhDResearchers #AcademicResearch #CarnegieMellonUniversity #BayArea #Pittsburgh

    All links and resources are available in the blogpost: https://www.itzikbs.com/cameras-as-rays
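To make the ray-bundle idea concrete, here is a minimal NumPy sketch (my own illustration, not code from Jason's paper; `camera_to_rays` and its parameters are hypothetical names): it back-projects pixels of a pinhole camera with intrinsics K and world-to-camera pose (R, t) into world-space rays, the per-pixel (origin, direction) pairs that the representation is built from.

```python
import numpy as np

def camera_to_rays(K, R, t, pixels):
    """Turn a pinhole camera (intrinsics K, world-to-camera rotation R,
    translation t) into a bundle of world-space rays, one per pixel.

    Each ray is (origin, direction): the origin is the camera centre
    c = -R^T t, and the direction back-projects the pixel through K^-1
    and rotates it into the world frame.
    """
    c = -R.T @ t                                    # camera centre in world coords
    px_h = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous pixels (N, 3)
    dirs_cam = (np.linalg.inv(K) @ px_h.T).T        # directions in the camera frame
    dirs_world = (R.T @ dirs_cam.T).T               # rotate into the world frame
    dirs_world /= np.linalg.norm(dirs_world, axis=1, keepdims=True)
    origins = np.broadcast_to(c, dirs_world.shape)
    return origins, dirs_world

# Example: a camera at the origin looking down +z
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
pixels = np.array([[320.0, 240.0], [0.0, 0.0]])
origins, dirs = camera_to_rays(K, R, t, pixels)
# The principal-point pixel maps to the optical axis (0, 0, 1)
```

A denoiser in this representation then operates jointly on the whole set of rays rather than on a single rotation/translation pair per camera.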
    --------  
    42:47
  • Instant3D - Jiahao Li
    Welcome to another exciting episode of the Talking Papers Podcast! In this episode, I had the pleasure of hosting Jiahao Li, a talented PhD student at the Toyota Technological Institute at Chicago (TTIC), to discuss his paper "Instant3D: Fast Text-to-3D with Sparse-View Generation and Large Reconstruction Model," published at ICLR 2024.

    Instant3D addresses the limitations of existing text-to-3D methods with a two-stage approach. First, a fine-tuned 2D text-to-image diffusion model generates a set of four structured, consistent views from the text prompt. Then a transformer-based sparse-view reconstructor directly regresses a NeRF from the generated images. The results are striking: high-quality, diverse 3D assets produced in a mere 20 seconds, about a hundred times faster than previous optimization-based methods.

    As a 3D enthusiast myself, I found the outcomes of Instant3D captivating, especially given how quickly they are generated. While it's unusual for a 3D person like me to experience these creations through a 2D projection, the results make the potential of this approach impossible to ignore. The paper also underscores the importance of obtaining more and better 3D data, paving the way for further advances in the field.

    A little anecdote about our guest: Jiahao and I were introduced by Yicong Hong, another brilliant guest on the podcast. Yicong, who was a PhD student at ANU during my postdoc, interned with Jiahao at Adobe while working on this very paper, and he is also a coauthor of Instant3D. It's incredible to see such brilliant minds coming together on one project.

    Unfortunately, the model developed in this paper is not publicly available. Given the computational resources required to train such models and the obvious copyright issues, though, it's understandable that Adobe has chosen to keep it proprietary. Not all of us have a hundred GPUs lying around, right?

    Remember to hit that subscribe button and join the conversation in the comments section. Let's delve into the exciting world of Instant3D with Jiahao Li!

    #TalkingPapersPodcast #ICLR2024 #Instant3D #TextTo3D #ResearchPapers #PhDStudents #AcademicResearch

    All links and resources are available in the blogpost: https://www.itzikbs.com/instant3d
    --------  
    52:41
  • Variational Barycentric Coordinates - Ana Dodik
    In this exciting episode of #TalkingPapersPodcast, we have the pleasure of hosting Ana Dodik, a second-year PhD student at MIT, to discuss her paper "Variational Barycentric Coordinates," published at SIGGRAPH Asia 2023. The paper significantly advances our understanding of how generalized barycentric coordinates can be optimized.

    The paper introduces a robust variational technique that offers far more control than existing models. Traditional approaches are restrictive because they represent barycentric coordinates with meshes or closed-form formulae. Dodik's research sidesteps these limits by using a neural field to directly parameterize the continuous function that maps any point in a polytope's interior to its barycentric coordinates. A thorough theoretical characterization of barycentric coordinates forms the backbone of this innovation. The work demonstrates the model's versatility by deploying a variety of objective functions, and it also suggests a practical acceleration strategy.

    My take: this tool could be very useful for artists, and I'm eager to hear their feedback on how it performs. Melding classical geometry-processing methods with newer neural-X methods, this research is a testament to how far the field has come.

    My talk with Ana was delightfully enriching. In our online conversation we also discussed how, thanks to improvements in technology, the current times are the perfect opportunity to pursue a PhD. Remember to hit the subscribe button and leave a comment with your thoughts on Ana's research; we'd love to hear your insights and keep this discourse going.

    All links and resources are available in the blogpost: https://www.itzikbs.com/variational-barycentric-coordinates
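For readers new to the topic, the defining properties that any generalized barycentric coordinates must preserve are easy to state in code. This is a minimal sketch using the classic closed-form triangle case (the kind of restrictive baseline the paper generalizes beyond), not Ana's neural parameterization; the function name is my own.

```python
import numpy as np

def triangle_barycentric(p, tri):
    """Closed-form barycentric coordinates of a 2D point p with respect
    to a triangle (3x2 array of vertices)."""
    a, b, c = tri
    T = np.column_stack([b - a, c - a])   # 2x2 edge matrix
    u, v = np.linalg.solve(T, p - a)      # local coordinates along the edges
    return np.array([1.0 - u - v, u, v])

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = np.array([0.25, 0.25])
w = triangle_barycentric(p, tri)          # -> [0.5, 0.25, 0.25]

# Defining properties any generalized coordinates must keep:
assert np.isclose(w.sum(), 1.0)           # partition of unity
assert np.allclose(w @ tri, p)            # reproduction of the point
assert (w >= 0).all()                     # non-negativity inside the polytope
```

A neural-field parameterization replaces the closed form with a learned function of p while being constrained (or penalized) to satisfy exactly these properties for arbitrary polytopes.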
    --------  
    41:03


About Talking Papers Podcast

🎙️ Welcome to the Talking Papers Podcast: Where Research Meets Conversation 🌟

Are you ready to explore the fascinating world of cutting-edge research in computer vision, machine learning, artificial intelligence, graphics, and beyond? Join us on this podcast by researchers, for researchers, as we venture into the heart of groundbreaking academic papers. At Talking Papers, we've reimagined the way research is shared: in each episode, we engage in insightful discussions with the main authors of academic papers, offering you a unique opportunity to dive deep into the minds behind the innovation.

📚 Structure That Resembles a Paper 📝
Just like a well-structured research paper, each episode takes you on a journey through the academic landscape. We provide a concise TL;DR (abstract) to set the stage, followed by a thorough exploration of related work, approach, results, conclusions, and a peek into future work.

🔍 Peer Review Unveiled: "What Did Reviewer 2 Say?" 📢
But that's not all! In an exclusive bonus section, authors candidly share their experiences of the peer-review process. Discover the insights, challenges, and triumphs behind the scenes of academic publishing.

🚀 Join the Conversation 💬
Whether you're a seasoned researcher or an enthusiast eager to explore the frontiers of knowledge, Talking Papers Podcast is your gateway to in-depth, engaging discussions with the experts shaping the future of technology and science.

🎧 Tune In and Stay Informed 🌐
Don't miss out on the latest in research and innovation. Subscribe and stay tuned for our enlightening episodes. Welcome to the future of research dissemination. Enjoy the journey! 🌠

#TalkingPapersPodcast #ResearchDissemination #AcademicInsights
Podcast website
