fixprix 11 hours ago

I recently got into creating avatars for VR and have used AI to learn Unity/Blender ridiculously fast; I've only been at it a couple of weeks. All the major models can answer basically any question. I can paste in screenshots of what I'm working on along with questions, and it will tell me step by step what to do. I'll ask it what particular settings mean (there are so many settings in 3D programs), and it'll explain them all and suggest defaults. You can literally give Gemini UV maps and it'll generate textures for you, or this for 3D models. It feels like the jump from before Stack Overflow to after.

The game Myst is all about this magical writing script that allowed people to write entire worlds in books. That's where it feels like this is all going. Unity/Blender/Photoshop/etc. are ripe for putting an LLM over the entire UI and exposing the APIs to it.
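
Blender, at least, is already scriptable end to end through its Python API (bpy), so a lot of the plumbing for that already exists. As a rough illustration, here's a minimal sketch of the kind of call an LLM could emit instead of clicking through the UI; it's meant to be run in Blender's scripting tab, and the object/material names are purely illustrative:

  import bpy

  # Add a cube and give it a simple red material: the kind of small,
  # verifiable step an LLM could drive through the exposed API.
  bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 1.0))
  cube = bpy.context.active_object

  mat = bpy.data.materials.new(name="DemoRed")  # illustrative name
  mat.use_nodes = True
  bsdf = mat.node_tree.nodes["Principled BSDF"]
  bsdf.inputs["Base Color"].default_value = (0.8, 0.1, 0.1, 1.0)
  cube.data.materials.append(mat)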

  • ForTheKidz 6 hours ago

    > The game Myst is all about this magical writing script that allowed people to write entire worlds in books. That's where it feels like this is all going. Unity/Blender/Photoshop/etc. are ripe for putting an LLM over the entire UI and exposing the APIs to it.

    This is probably the first pitch for using AI as leverage that's actually connected with me. I don't want to write my own movie (sounds fucking miserable), but I do want to watch yours!

    • iaw 3 hours ago

      I have this system 80% done for novels on my machine at home.

      It is terrifyingly good at writing. I expected freshman college level, but it's actually close to professional in terms of prose.

      The plan is to maybe transition into children's books, then children's shows made with AI and catered to a particular child at a particular phase of development (Bluey talks to your kid about making sure to pick up their toys).

      • thisisnotauser 2 hours ago

        I think there's a big question in there about AI that breaks a lot of my preexisting worldviews about how economics works: if anyone can do this at home, who are you going to sell it to?

        Maybe today only a few people can do this, but five years from now? Ten? What sucker would pay for any TV shows or books or video games or anything if there's a ComfyUI workflow or whatever I can download for free to make my own?

        • CamperBob2 an hour ago

          > What sucker would pay for any TV shows or books or video games or anything if there's a ComfyUI workflow or whatever I can download for free to make my own?

          I think it's about time the industry faced that risk. They have it coming in spades.

          For example, LOST wouldn't have been such a galactic waste of time if I could have asked an AI to rewrite the last half of the series. Current-generation AI is almost sufficient to do a better job than the actual writers, as far as the screenplay itself is concerned, and eventually the technology will be able to render what it writes.

          Call it... severance.

  • tempaccount420 3 hours ago

    > Unity/Blender/Photoshop/etc. are ripe for putting an LLM over the entire UI and exposing the APIs to it.

    This is what Windows Copilot should have been!

  • mclau156 3 hours ago

    I have never seen knowledge be the limiting factor for success in the 3D world; it's usually the sheer amount of dedicated time it takes to model, rig, and animate.

    • iamjackg 3 hours ago

      It's often the limiting factor to getting started, though. Idiosyncratic interfaces and control methods make it really tedious to start learning from scratch.

  • anonzzzies 10 hours ago

    Have you tried sharing your screen with Gemini instead of screenshots? I've found it's sometimes really brilliant and sometimes terrible. It's mostly a win, really.

  • baq 9 hours ago

    Look up Blender and Unity MCP videos. It's working today.

    • fixprix 2 hours ago

      Watching a video on it now, thanks!

sruc 9 hours ago

Nice model, but strange license. You are not allowed to use it in the EU, UK, or South Korea.

“Territory” shall mean the worldwide territory, excluding the territory of the European Union, United Kingdom and South Korea.

You agree not to use Tencent Hunyuan 3D 2.0 or Model Derivatives: 1. Outside the Territory;

  • johaugum 8 hours ago

    Meta’s Llama models (and likely many others') have similar restrictions.

    Since they don’t fully comply with EU AI regulations, Meta preemptively disallows their use in those regions to avoid legal complications:

    “With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models”

    https://github.com/meta-llama/llama-models/blob/main/models/...

  • ForTheKidz 6 hours ago

    Probably for domestic protection more than anything at face value. Western licenses certainly have similar clauses to protect against liability for sanctions violations. It's not like they can actually do much to prevent the EU from benefiting from it.

    North Korea? Maybe. The UK? Who gives a shit.

  • littlestymaar 8 hours ago

    This is merely a “we don't take responsibility if this somehow violates EU rules around AI”; it's not something they can enforce in any way.

    But even as such a strategy, I don't think it would hold up if the Commission decided to fine Tencent for releasing this, in case it violated the regulation.

    IMHO it's just the lawyers doing something to please the boss who asked them to “solve the problem” (which they can't, really).

  • justlikereddit 5 hours ago

    Because the EU regulations on AI and much else can be summarized as

    >"We're going to bleed you dry you through lawfare taxation, not IF, we're going to fucking do it!!!!"

    The UK? The only reason they don't publicly execute people for social media thought crime is that they abolished capital punishment.

    The West is going to hell.

Y_Y 7 hours ago

How are they extracting value here? Is this just space-race-4-turbo propagandising?

I see plenty of GitHub sites that are barely more than advertising, where some company tries to foss-wash their crapware, or tries to build a little text-colouring library that burrows into big projects as a sleeper dependency. But this isn't that.

What's the long game for these companies?

awongh 5 hours ago

What's the best img2mesh model out there right now, regardless of processing requirements?

Are any of them better or worse with mesh cleanliness? Thinking in terms of 3D printing...

  • MITSardine 5 hours ago

    From what I could tell from the Git repo (2 min of skimming), their model generates a point cloud, and they then apply non-ML meshing methods (marching cubes) on top of that to generate a surface mesh. So you could plug any point-cloud-to-surface-mesh software in there.

    I initially wondered how they managed to produce valid meshes robustly, but the answer is that the model doesn't produce one directly, which I think is wise!
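
    For anyone curious what that last non-ML step looks like: assuming the intermediate representation gets sampled onto a scalar grid (which is what marching cubes actually operates on), a minimal sketch with scikit-image and trimesh would be something like the following, with a sphere SDF standing in for whatever field the model decodes:

      import numpy as np
      import trimesh
      from skimage import measure

      # Stand-in scalar field: signed distance to a unit sphere on a 64^3 grid.
      grid = np.linspace(-1.5, 1.5, 64)
      x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
      sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0

      # Marching cubes at the zero level set returns vertices and faces.
      verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)

      mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
      mesh.export("sphere.obj")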

quitit 6 hours ago

Running my usual img2mesh tests on this.

1. It does a pretty good job, definitely a steady improvement

2. The demos are quite generous compared to my own testing; however, this type of cherry-picking isn't unusual.

3. The mesh is reasonably clean. There are still some areas of total mayhem (but these are easy to fix in clay modelling software).

lwansbrough 8 hours ago

How long before we start getting these rigged using AI too? I’ve seen a few of these 3D models so far but none that do rigging.

leshokunin 14 hours ago

Can we see example meshes, or exports in common apps?

This looks better than the other one on the front page rn

  • dvrp 13 hours ago

    Agree. That's why I posted it; I was surprised people were sleeping on this. But that's because they posted something yesterday, so the link dedup logic ignored this one. That's why I linked to the commit instead.

    There are mesh examples on the GitHub. I'll toy around with it.

  • llm_nerd 4 hours ago

    Generate some of your own meshes and drop them in Blender.

    https://huggingface.co/spaces/tencent/Hunyuan3D-2

    The meshes are very face-rich and unfortunately do not reduce well in any current tool [1]. A skilled Blender user can quickly generate better meshes with a small fraction of the vertices. However, if you don't care about that, or if you're just using it for brainstorming starter models, it can be super useful.

    [1] A massive improvement in the space will be AI or algorithmic tools that can decimate models better than the current crop. Often thousands of vertices could be reduced to a fraction with no appreciable impact on quality, but current tools can't do this.
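
    Until then, the usual workaround is Blender's Decimate (collapse) modifier, with the mediocre results described above. A minimal sketch from Blender's Python console; the object name and the 0.1 ratio are just placeholders:

      import bpy

      # "GeneratedMesh" is a placeholder name for the imported generated object.
      obj = bpy.data.objects["GeneratedMesh"]
      bpy.context.view_layer.objects.active = obj

      mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
      mod.decimate_type = 'COLLAPSE'
      mod.ratio = 0.1  # keep roughly 10% of the faces; tune per model

      bpy.ops.object.modifier_apply(modifier=mod.name)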

amelius 6 hours ago

I don't understand why it is necessary to make it this fast.

  • Philpax 5 hours ago

    It helps with iteration - you can try out different concepts and variations quickly without having to wait, especially as you refine what you want and your understanding of what it's capable of.

    Also, in general, why not?

    • amelius 5 hours ago

      > Also, in general, why not?

      There are various reasons:

      - Premature optimization will take away flexibility, and will thus affect your ability to change the code later.

      - If you later add features that affect performance, then since the users are used to the high performance, they might think your code is slow.

      - There are always a thousand things to work on, so why spend effort on things that users, at this point, don't care much about?

      • TeMPOraL 5 hours ago

        Being this fast is not a "premature optimization"; it's a qualitatively different product category. Near-immediate feedback vs. a long wait enables entirely different kinds of working.

        Also:

        > since the users are used to the high performance, they might think your code is slow.

        I wouldn't worry about that in general: almost all software is ridiculously slow for how little it does and for the performance of the machines it runs on, and it still gets used. Users have little choice anyway.

        In this specific case, if speed makes it into a different product, then losing that speed makes the new thing... a different product.

        > There are always a thousand things to work on, so why spend effort on things that users, at this point, don't care much about?

        It's R&D work, and it's not like they're selling it. Optimizing for speed and low resource usage is actually a good way to stop the big players from building moats around the technology, and to me, that seems like a big win for humanity.

      • andybak 5 hours ago

        > users, at this point, don't care much about?

        What makes you think this is true?

      • llm_nerd 4 hours ago

        They released the original "slow" version several months ago. After understanding the problem space better they can now release the much, much faster variant. That is the complete opposite of premature optimization.

        Yes, of course people care about performance. Generating the mesh on a 3060 took 110+ seconds before, and now takes about 1 second. And in early tests the quality is largely the same. I'd rather wait 1 second than 110 seconds, wouldn't you? And obviously this has an enormous impact on the financials of operating this as a service.

  • bufferoverflow an hour ago

    Fast is always better than slow, if the quality isn't worse.

dvrp 15 hours ago
  • Flux159 14 hours ago

    I think the link should be updated to this since it's currently just pointing to a git commit.

    • dvrp 8 hours ago

      The reason for that is that the dedup filter thought this release was the same as the one that happened yesterday. Besides, the Flash release is only one of many.

boppo1 14 hours ago

Can it run on a 4080, just slower, or is the VRAM a limitation?

  • dvrp 13 hours ago

    They don't mention that and I don't have one; can you try it yourself and let us know? I think you can get it from Hugging Face or GH @ https://github.com/Tencent/Hunyuan3D-2

    • fancyfredbot 9 hours ago

      They mention "It takes 6 GB VRAM for shape generation and 24.5 GB for shape and texture generation in total."

      So based on this, your 4080 can do shape generation but not texture generation.

      • boppo1 7 hours ago

        Nice, that's all I needed anyway.

  • llm_nerd 4 hours ago

    It can run on a 4080 if you divide and conquer. I just ran a set on my 3060 (12 GB), although I have my own script that does each step separately, since each stage uses 6-12 GB of VRAM. The script:

    - loads the diffusion model to go from text to an image, then generates a varied series of images based on my text. One of the most powerful features of this tool, in my opinion, is text to mesh; to do this it uses a variant of Stable Diffusion to create 2D images as a starting point, then returns to the image-to-mesh pipeline. If you already have an image, this part obviously isn't necessary.

    - frees the diffusion model from memory.

    Then for each image I:

    - load the image-to-mesh model, which takes approximately 12 GB of VRAM, and generate a mesh

    - free the image-to-mesh model

    - load the mesh + image to textured-mesh model and texture the mesh

    - free the mesh + image to textured-mesh model

    It adds a lot of I/O between each stage, but with super fast SSDs it just isn't a big problem.
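
    The pattern is just load, run, free, repeat. Here's a minimal sketch of it in PyTorch terms; the load_*_model() functions are hypothetical placeholders for however the repo exposes each stage, not its actual API:

      import gc
      import torch

      def clear_vram():
          # Force Python and CUDA to release cached GPU memory between stages.
          gc.collect()
          torch.cuda.empty_cache()

      # Stage 1: text -> candidate images (skip if you already have an image).
      diffusion = load_text_to_image_model()          # hypothetical loader
      images = diffusion("a cartoon fox, 3D style")
      del diffusion
      clear_vram()

      for i, image in enumerate(images):
          # Stage 2: image -> untextured mesh (~12 GB of VRAM on its own).
          shape_model = load_image_to_mesh_model()    # hypothetical loader
          mesh = shape_model(image)
          del shape_model
          clear_vram()

          # Stage 3: mesh + image -> textured mesh.
          paint_model = load_texture_model()          # hypothetical loader
          textured = paint_model(mesh, image)
          del paint_model
          clear_vram()

          textured.export(f"model_{i}.glb")

    The del plus empty_cache() dance is what actually returns the VRAM between stages; without dropping the reference first, the allocator keeps the weights alive and the next model won't fit.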

    • llm_nerd 4 hours ago

      Just as a humorous aside: if you use the text-to-mesh pipeline, as mentioned, the first stage is simply a call to a presumably fine-tuned variant of Stable Diffusion with your text and the following prompts (translated from Simplified Chinese):

      Positive: "White background, 3D style, best quality"

      Negative: "text, closeup, cropped, out of frame, worst quality, low quality, JPEG artifacts, PGLY, duplicate, morbid, mutilated, extra fingers, mutated hands, bad hands, bad face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck"

      Thought that was funny.

  • thot_experiment 11 hours ago

    Almost certainly. I haven't tried the most recent models, but I have used hy3d2 and hy3d2-fast a lot and they're quite light to inference. You're gonna spend more time decoding the latent than you will inferencing. It takes about 6 GB of VRAM on my machine; I can't imagine these will be heavier.

coolius 4 hours ago

Has anyone tried to run this on Apple Silicon yet?

  • postalrat 2 hours ago

    That would be revolutionary.