Of all the big tech events, SIGGRAPH is the grandest one for the graphics industry. It is where researchers, engineers, and everyone else in graphics get the big stage to show the world their latest and best work. Think of it as the Met Gala for graphics companies and enthusiasts, and this year, Meta and Nvidia came in style.

More: Nvidia Became the Third Most Valuable U.S. Company Thanks to AI Developments

The two tech giants arrived at the industry's premier conference with a slate of exciting announcements that have everybody buzzing.

The USD file format created by Pixar took center stage at SIGGRAPH. Nvidia has fully embraced its open-source incarnation, OpenUSD: the GPU pioneer is using it with a growing roster of ISV partners to reinvent how teams collaborate and share scenes and projects, and has built it into its Nvidia Omniverse cloud platform.
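
For the curious, here is what that collaboration model looks like at the file level. This is a minimal sketch using Pixar's official pxr Python bindings (installable as the usd-core package); the file and prim names are made up for illustration:

```python
# Minimal OpenUSD sketch: two collaborators contribute layers to one scene.
# Assumes Pixar's Python bindings are installed (pip install usd-core).
from pxr import Usd, UsdGeom

# Artist A authors the base layer with a simple prop.
stage = Usd.Stage.CreateNew("shot_base.usda")
UsdGeom.Xform.Define(stage, "/World")
UsdGeom.Cube.Define(stage, "/World/Prop")
stage.GetRootLayer().Save()

# Artist B overrides the same prim from a separate layer, without
# touching Artist A's file -- the heart of USD collaboration.
override_stage = Usd.Stage.CreateNew("shot_override.usda")
override_stage.GetRootLayer().subLayerPaths.append("shot_base.usda")
prop = UsdGeom.Cube(override_stage.OverridePrim("/World/Prop"))
prop.CreateSizeAttr(4.0)  # the opinion in the stronger layer wins
override_stage.GetRootLayer().Save()
```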

Nvidia also announced a series of NIM microservices that bring generative AI agents and copilots into USD workflows. NIMs already help developers cut time to market, and these new microservices go a step further for USD users: most are designed to smooth the USD experience and enhance applications, with USD Search a standout.
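
NIMs are shipped as containerized web services, so talking to one is just an HTTP call. Nvidia's actual USD Search API isn't reproduced here; the endpoint URL, payload fields, and response shape below are placeholder assumptions meant only to show the shape of the workflow:

```python
# Hypothetical sketch of querying a USD Search-style NIM over HTTP.
# The URL, payload fields, and response keys are assumptions, not
# NVIDIA's documented API -- NIMs are generally exposed as REST
# microservices, so the call pattern looks roughly like this.
import requests

NIM_URL = "http://localhost:8000/v1/search"   # placeholder endpoint
payload = {
    "query": "rusty metal barrel",            # natural-language asset search
    "limit": 5,
}
resp = requests.post(NIM_URL, json=payload, timeout=30)
resp.raise_for_status()
for hit in resp.json().get("results", []):    # assumed response field
    print(hit)
```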

A USD Layout NIM and a USD SmartMaterial NIM were among the highlights of Nvidia's latest offerings: the former builds OpenUSD-based scenes from simple text prompts, while the latter applies realistic materials to CAD objects. It's a remarkable simplification of 3D tooling.
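
Under the hood, what SmartMaterial automates is ordinary OpenUSD material authoring. As a rough illustration of what the service saves you from doing by hand, here is the manual version of assigning a PBR material with Pixar's UsdShade API (a real API; the scene and material names are invented):

```python
# What a material assignment looks like in raw OpenUSD (pxr bindings);
# a SmartMaterial-style service effectively authors data like this
# from a text prompt.
from pxr import Usd, UsdGeom, UsdShade, Sdf

stage = Usd.Stage.CreateNew("cad_part.usda")
part = UsdGeom.Cube.Define(stage, "/World/Part")

# Author a simple PBR material with UsdPreviewSurface.
material = UsdShade.Material.Define(stage, "/World/Looks/BrushedSteel")
shader = UsdShade.Shader.Define(stage, "/World/Looks/BrushedSteel/Shader")
shader.CreateIdAttr("UsdPreviewSurface")
shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set((0.6, 0.6, 0.65))
shader.CreateInput("metallic", Sdf.ValueTypeNames.Float).Set(1.0)
shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(0.35)
material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")

# Bind the material to the CAD object.
UsdShade.MaterialBindingAPI.Apply(part.GetPrim()).Bind(material)
stage.GetRootLayer().Save()
```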

fVDB, Nvidia's framework for large-scale volumetric data, was another highlight, and the new fVDB MeshGeneration NIM is set to take 3D environment creation to the next level. Built on the OpenVDB library, fVDB promises faster and more expansive simulations that suit many industries.
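
fVDB is a GPU deep-learning framework, but the structure underneath is OpenVDB's sparse voxel grid, which stores only the voxels that matter. As a small taste of why that's efficient, here is a sketch using the pyopenvdb bindings (assuming OpenVDB was built with its Python module enabled):

```python
# A taste of the sparse volumes fVDB builds on, via OpenVDB's Python
# bindings (requires an OpenVDB build with the Python module).
import numpy as np
import pyopenvdb as vdb

# Dense numpy density field -> sparse VDB grid; empty space costs almost nothing.
density = np.zeros((64, 64, 64), dtype=np.float32)
density[24:40, 24:40, 24:40] = 1.0            # a small solid block

grid = vdb.FloatGrid()
grid.copyFromArray(density)                   # only non-background voxels are stored
grid.name = "density"
vdb.write("block.vdb", grids=[grid])

print(grid.activeVoxelCount())                # far fewer than 64**3
```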

Mark Zuckerberg's Meta also had exciting things to show at SIGGRAPH. The company has added a new model, SAM 2, to its recent spree of open-source AI releases. SAM 2 outclasses the original SAM (Segment Anything Model) by handling video as well as still images, fast enough for real-time use.

SAM 2 enables real-time tracking of objects regardless of content type, which makes it useful across many industries. Meta claims SAM 2 is currently the best at object segmentation in videos and images, and you can judge for yourself through the demo on Meta's site.
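
SAM 2 also ships as an open-source Python package, and based on Meta's published repo, prompting it on a single image looks roughly like this (the checkpoint and config filenames match the initial release and may change):

```python
# Prompt SAM 2 with a single click on one image, following Meta's
# open-source repo (github.com/facebookresearch/segment-anything-2).
# Filenames below match the initial release and may differ later.
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor(
    build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
)  # defaults to a CUDA device

image = np.array(Image.open("frame.jpg").convert("RGB"))
with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),  # one foreground click (x, y)
        point_labels=np.array([1]),           # 1 = positive point
    )
print(masks.shape, scores)                    # candidate masks + confidences
```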

But that's not all from Meta. The company also announced SA-V, a dataset for training general-purpose segmentation models on open-world videos. It covers a wide range of subjects, from locations and objects to entire real-world scenes, and comes packed with roughly 51,000 videos and 643,000 spatiotemporal segmentation masks (masklets), a trove for anyone building segmentation models.
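
The masklets are distributed as COCO-style run-length encodings (RLE). Here is a hedged sketch of decoding one frame of one masklet; the filename pattern and JSON field names are assumptions about the annotation layout, not a documented schema:

```python
# Hedged sketch: decode one SA-V masklet frame. SA-V stores masks in
# COCO run-length encoding (RLE); the filename pattern and JSON field
# names below are assumptions, not a documented schema.
import json
from pycocotools import mask as mask_utils

with open("sav_000001_manual.json") as f:         # assumed filename pattern
    ann = json.load(f)

first_masklet = ann["masklet"][0]                 # assumed field: per-frame RLE list
first_frame_rle = first_masklet[0]                # RLE dict: {"size": [h, w], "counts": ...}
binary_mask = mask_utils.decode(first_frame_rle)  # binary mask as a numpy array
print(binary_mask.shape, binary_mask.sum())
```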

Related: Meta to End NFT Initiatives on Facebook and Instagram

In short, SIGGRAPH 2024 made it clear that Meta and Nvidia are not just leading the graphics industry; they're shaping its future. With innovative, practical tools, they're setting the stage for what's next in AI and 3D graphics, and it's exciting to see where they'll take us.

Disclaimer: All materials on this site are for informational purposes only. None of the material should be interpreted as investment advice. Please note that despite the nature of much of the material created and hosted on this website, HODL FM is not a financial reference resource and the opinions of authors and other contributors are their own and should not be taken as financial advice. If you require advice of this sort, HODL FM strongly recommends contacting a qualified industry professional.