
Meshy Announces the Launch of Meshy-3, Its Most Advanced Generative AI 3D Modeling Platform

Meshy-3 brings enhanced text-to-3D and image-to-3D capabilities to the gaming, film, and design industries.

Santa Clara, CA, USA — Meshy, a pioneer in applying generative AI technology to 3D modeling, proudly announces the release of Meshy-3, the latest version of its groundbreaking 3D generative AI platform. Meshy-3 not only significantly enhances its signature “Text to 3D” capabilities but also optimizes the Image to 3D pipeline, marking a major breakthrough in the field of 3D AI modeling.

Watch Meshy-3 in action on YouTube: https://www.youtube.com/watch?v=6tLVqm39dfg

“On the first anniversary of Meshy, we are thrilled to push the boundaries of 3D generative AI once again with Meshy-3,” said Ethan, co-founder and CEO of Meshy. “With its ultra-realistic Sculpture Mode, revolutionary PBR texture generation, and powerful Image to 3D conversion, Meshy-3 will provide creators and businesses in industries such as gaming, film, and design with unprecedented freedom and efficiency in creating 3D content.”

Meshy-3, the latest advancement from Meshy, introduces several key features that set new standards in the 3D generative AI field. One of the standout innovations is the Sculpture Mode, which employs high-poly model generation technology capable of creating 3D models with photogrammetry-level detail. This feature is ideal for next-gen games and epic film scenes, providing models with crisp normal maps that deliver stunning realism.

The platform also enhances its PBR Texture Generation. Meshy-3 now generates separate metallic, roughness, and normal maps, allowing for precise material differentiation and enabling models to adapt seamlessly to dynamic lighting conditions. This functionality can be accessed with a single click by selecting the “PBR” style in the “Text to 3D” feature, simplifying the process for users.
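To illustrate what separate PBR maps mean in practice, the sketch below shows one way such maps could be packed into a standard glTF 2.0 metallic-roughness material. It is not Meshy's official tooling or API; the file names are hypothetical stand-ins for a Meshy-3 export, and the open-source Pillow and trimesh libraries are assumptions chosen for illustration.

```python
# Illustrative sketch only: file names are hypothetical stand-ins for a Meshy-3
# export, and Pillow/trimesh are generic open-source tools, not Meshy software.
from PIL import Image
import trimesh

# Load a textured mesh export (assumes it carries UV coordinates).
mesh = trimesh.load("meshy3_model.obj")

# Separate grayscale maps, as described above.
metallic = Image.open("metallic.png").convert("L")
roughness = Image.open("roughness.png").convert("L")

# glTF 2.0 stores roughness in the green channel and metallic in the blue channel.
unused = Image.new("L", metallic.size, 0)
metallic_roughness = Image.merge("RGB", (unused, roughness, metallic))

material = trimesh.visual.material.PBRMaterial(
    baseColorTexture=Image.open("base_color.png"),
    metallicRoughnessTexture=metallic_roughness,
    normalTexture=Image.open("normal.png"),
)
mesh.visual.material = material

# Re-export so engines and renderers pick up the full PBR material.
mesh.export("meshy3_model_pbr.glb")
```

Keeping metallic and roughness as independent channels is what lets a single model combine, for example, polished metal and matte fabric that respond differently under the same lighting.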

Further improvements include enhanced mesh and texture quality. Meshy-3 refines the details and accuracy of textures on generated models, pushing the limits of what 3D AI modeling can achieve. Additionally, Meshy-3 upgrades its Image to 3D capabilities, building on the robust Text to 3D features of its predecessors. This new function allows users to easily convert photos into detail-rich 3D models.

New features in Meshy-3 also include Smart Texture Healing, which helps users correct anomalies in AI-generated models, such as extra eyes or misplaced features. The Prompt Helper aids users in composing 3D generation prompts with ease using Meshy-3’s built-in prompt library, streamlining the creative process. Furthermore, the introduction of Community Badges in Meshy’s creator community adds an element of fun and engagement, allowing users to earn personalized badges by creating and sharing their work.

“Looking back on Meshy’s extraordinary journey since its inception in April 2023, we are incredibly proud of the progress we’ve made and the growth of our community,” said Ethan. “As we continue to push the limits of 3D generative AI, stay tuned for more updates from Meshy. We are committed to providing users with even more powerful and intuitive tools for creating 3D content, unleashing the unlimited potential of every idea.”

For more information about Meshy-3, please visit www.meshy.ai or follow Meshy on Twitter @MeshyAI.

About Meshy

Meshy is an innovative technology company based in Santa Clara, dedicated to developing cutting-edge 3D generative AI technology that helps creators and businesses in industries such as gaming, film, and design easily create high-quality 3D content. Since its founding in April 2023, Meshy has grown to become a leader in the field of 3D AI modeling, with its pioneering Text to 3D and Image to 3D solutions revolutionizing the way people create and use 3D content.


Media Contact

Company Name: Meshy

Contact Person: Ariel

Email: team@meshy.ai

Website: https://www.meshy.ai/
