My first Vray render with Depth of Field + Lens Distortion
I rather screwed up the glass ball; it looks more like a bubble.
Adding DOF slowed it down a ton; fortunately you can bake out the GI and play with the DOF separately.
+ Caustics... looks messed up... hmm, what have I done wrong?
Uhm... I've got no idea.
Vray seems to have some weird bug where it sometimes gets stuck on the initial light cache and never finishes, just eating up all the RAM. Very annoying.
I've swapped the ball for, uhm... a glass turd (a cone with a twirl deformer).
These renders are all on the medium preset, so a bit coarse, but they're rendering in about 10 minutes so I can't complain.
And from another angle:
I have no idea how optimized or un-optimized Vray is for Mac right now, as it's not feature-complete in Cinema yet.
Here's my third Vray attempt. It took 3 minutes!
Don't ask me what happened to the colors; pilot error is highly possible.
The reason it took 3 minutes was that I swapped out the white sphere for an area light, which makes it substantially faster.
Ok, here's my second Vray render attempt. Oddly, it took an hour just the same.
Vray 2hr. It's a bit smoother, which is more noticeable in the full-size image than in this small version.
I get the feeling that if I rendered for the same 4 hours as the Maxwell render, it would be very close to the same accuracy and would have zero noise (you can't see the noise in the Maxwell one as it's zoomed out). But at the same time, I'd take noise over glitchy, dented-looking GI any day.
Now to compare this to another renderer, Cinema's:
Cinema 4hr 25min. I think I had it set up a bit wrong, but even so... what a load of shit.
Someone else's comparison of Vray to Maxwell:
5 hours versus 16 hours
And the yellow light in the Vray render is due to the color-mapping type; Maxwell applies a burnout effect in post, which I'm sure you could reproduce with Vray/Photoshop on a 32-bit image.
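The idea behind that burnout effect can be sketched in a few lines: on a 32-bit linear image, values above 1.0 survive, so in post you can either clamp them (hard burn to white) or roll them off smoothly. This is a minimal sketch assuming a Reinhard-style curve; the function names are mine, not anything from Maxwell or Vray.

```python
# Sketch of post "burnout" on 32-bit linear pixel values (hypothetical
# helper names; a Reinhard-style curve stands in for Maxwell's own).
def clamp_map(v):
    """Naive clamp: everything above 1.0 burns to pure white."""
    return min(v, 1.0)

def rolloff_map(v):
    """Reinhard-style rolloff: highlights compress smoothly toward 1.0."""
    return v / (1.0 + v)

# A few HDR values, from a midtone up to a blown highlight.
for v in [0.25, 1.0, 4.0, 16.0]:
    print(f"{v:5.2f}  clamp={clamp_map(v):.3f}  rolloff={rolloff_map(v):.3f}")
```

The rolloff is why burned-out highlights keep some gradation instead of turning into flat white blobs; you can only do this if the render was saved in a 32-bit float format.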
So to summarize: Vray is awesome.
Vray will let you render all the GI and save it to a file, then reuse it, meaning you can re-render with different depth of field or different exposure, and there's even an option to render an animation from one GI calculation if only the camera moves. This means you can re-render using the same GI at any resolution: render small, then re-render at print size, and it will only take 2 minutes. Maxwell, by contrast, gets exponentially slower the bigger you render.
-- update -- Ok, turns out I misunderstood the Vray GI: you can't just render the GI once and use it for a whole animation. It will only calculate the GI it knows you are going to see, by looking ahead at the places the camera goes. And you can't use this method with irradiance mapping; you can only use it with light cache mode, which is very, very slow and hard to use in my opinion. That makes it somewhat useless, really. But if your camera doesn't move, you'll be fine.
It says light cache mode is better, and I guess it produces nicer, more Maxwell-like results, but it seems so much slower and harder to set up, and you can't tell what the final will look like until the last moment.
Maxwell is better in that you can change individual light strengths after rendering, and you can render for as long as you like: if your Vray render looks shit after 4 hours you have to start again, whereas Maxwell can always resume and refine.
so vray = best for animation and large stills
maxwell = best for small stills
cinemas default renderer = uhm... useless, maybe best for basic motion graphics where no GI is used.
You know what would be really handy: photos of models/objects that have depth information and/or normal directions.
Depth information alone should suffice, since from that you can generate a normal map, and that gives you all kinds of power: relighting a photograph from a different angle, shadow casting, adding reflections/specular. At the moment, integrating a photo into a 3D comp, either still or (worse) in motion, is a real pain in the ass/impossible.
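The depth-to-normals step above is just finite differences: the normal at each pixel leans against the depth gradient. A minimal sketch with NumPy, assuming a toy ramp as the depth map (a real photo's depth channel would be loaded from an image instead); the function name and `strength` parameter are mine.

```python
import numpy as np

def depth_to_normals(depth, strength=1.0):
    """Return an (H, W, 3) array of unit normals from an (H, W) depth map."""
    # Finite-difference gradients of depth per pixel (row-wise, col-wise).
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # Normal tilts against the gradient; the z term controls "flatness".
    normals = np.dstack((-dz_dx * strength,
                         -dz_dy * strength,
                         np.ones_like(depth, dtype=np.float64)))
    norms = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / norms

# Toy depth: a linear ramp in x. Every pixel should get the same
# tilted normal, roughly (-0.707, 0, 0.707).
ramp = np.tile(np.arange(4, dtype=np.float64), (4, 1))
n = depth_to_normals(ramp)
print(n[2, 2])
```

Once you have normals, relighting is a dot product against a light direction per pixel, which is essentially what tools like ZbornToy do in real time.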
2D comp work in general just looks so flat when people try to integrate photographic elements.
I'm going to have a go at faking it. I imagine I'll be largely unsuccessful, but I won't know till I try.
Check out ZbornToy if you want a better idea of what I'm on about. In fact, just check it out anyway; it's freaky.
Here's the original photograph
Here's a real quick draft
It works better than I thought. It's far from perfect; I only messed about with it for a few minutes. It definitely brings greater flexibility if you want to quickly integrate an image's lighting, and you can adjust the lighting in real time.
If it were done properly, you could even change the camera angle somewhat.
More refined model (started from scratch using another method; it's still really basic)
Really, it's amazing it works at all given how crude the model/displacement is... I'm not even going to show the mesh as it's so bad.
To be done properly/easily, it should be done in either Modo 301 or ZBrush, or by someone more skilled at modelling than me.
Whatever way you look at it, it's a damn sight faster/more powerful than hand-painting lighting changes, at least for organic shapes.
Original (badly masked, just for a quick test), followed by my alterations
Again using a very rough model
I created a fluid simulation a while ago and have been trying to find a way to pick out select particles and trace them. This turned out to be more difficult than one would expect, requiring me to learn a new programming language (Python) in order to create something in RealFlow that picks out particles and clones them into a new emitter, allowing export to Cinema 4D.
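The pick-and-clone logic is simple once stripped of the RealFlow specifics. This is a hedged sketch in plain Python: RealFlow's actual scripting API has its own scene/emitter/particle objects, so the `Particle` class and `clone_selected` function here are hypothetical stand-ins that just show the idea of selecting particles by stable ID and copying them into a new "emitter".

```python
from dataclasses import dataclass

@dataclass
class Particle:
    pid: int          # stable particle ID, so the same particle is traceable over frames
    position: tuple   # (x, y, z) at the current frame

def clone_selected(source_particles, wanted_ids):
    """Copy only the hand-picked particles into a new list (the 'new emitter')."""
    wanted = set(wanted_ids)
    return [Particle(p.pid, p.position) for p in source_particles if p.pid in wanted]

# Toy frame with five particles; clone just two of them.
frame = [Particle(i, (float(i), 0.0, 0.0)) for i in range(5)]
traced = clone_selected(frame, [1, 3])
print([p.pid for p in traced])  # [1, 3]
```

The important detail is filtering by ID rather than by index, since a fluid sim's particle count and ordering change as particles are born and die.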
Finally finished the Python code to do it and imported it into Cinema for some rough test renders:
Just created something that will trace the particles within RealFlow, allowing them to be meshed in RealFlow:
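"Tracing" here just means recording a selected particle's position every frame so the path can later be meshed or exported. Another hedged sketch with toy data; in RealFlow you'd query the emitter each simulation step rather than iterate over a prebuilt list, and the function name is mine.

```python
def trace_particle(frames, target_id):
    """Return the ordered list of positions for one particle ID across frames."""
    path = []
    for frame in frames:          # frames: list of {pid: (x, y, z)} snapshots
        if target_id in frame:    # the particle may die, or not exist yet
            path.append(frame[target_id])
    return path

# Three toy frames; particle 8 dies before the last one.
frames = [
    {7: (0.0, 0.0, 0.0), 8: (1.0, 0.0, 0.0)},
    {7: (0.0, 0.5, 0.0), 8: (1.0, 0.4, 0.0)},
    {7: (0.0, 1.1, 0.1)},
]
print(trace_particle(frames, 7))  # three positions, one per frame
```

Skipping frames where the ID is missing is what keeps the trace from breaking when particles are killed mid-sim.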
RealFlow's execution of Python is unbelievably slow, to the point of being useless... that, or I'm doing something really wrong.
Dunno who to blame for the glitches; I've had weird glitches across ALL applications lately, so maybe my GPU is dying.