Tuesday, May 17, 2011

Navigator and different levels of "on-the-fly" generation

The navigator is implemented, and thanks to the graph of the road network, I can grab the containers near the camera without searching through the whole list of road segments. It is fast and efficient.

The navigator grabs containers within several circles of different radii. Each radius has a generation level assigned to it. So I created the Context Updater, a thread running in the background that generates/degenerates the shapes of the grabbed containers according to their distance to the camera. This thread updates each time the camera moves.
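The ring-to-level mapping can be sketched as follows. This is a minimal illustration, not the real code: the class name, the radii and the "-1 means degenerate" convention are all assumptions of mine.

```java
// Sketch (hypothetical names and values): map a container's distance
// to the camera onto a discrete generation level, one per radius ring.
public class GenerationLevels {

    // Ring radii in metres, smallest first; illustrative values only.
    static final double[] RING_RADII = {50.0, 150.0, 400.0};

    // Level 0 = fully generated, higher = coarser; beyond the last
    // ring the container is degenerated entirely (returns -1).
    static int levelFor(double distanceToCamera) {
        for (int level = 0; level < RING_RADII.length; level++) {
            if (distanceToCamera <= RING_RADII[level]) {
                return level;
            }
        }
        return -1; // outside all rings: degenerate
    }
}
```

Each camera move, the Context Updater would recompute `levelFor` per grabbed container and generate or erase shapes until the stored level matches.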

The Context Updater can lag behind or be killed because of the time and memory needed to generate shapes. For example, in the video above, I can't get into the city centres because there are too many shapes to compute (100 MB needed for a single tall building). I will have to optimise my shape structure to avoid that problem.

For now, the Context Updater seems to work fine. It does the same job as a LOD system, but while a LOD system shows and hides differently detailed versions of a model, my thread creates the content from a seed in real time, and degenerates it when it's no longer needed.




Organisation of shapes

For a long time now, I've been working on optimisation. Not to gain more performance or more fps, of course, but to make the program able to cope with the modelling complexity to come.

As we saw a few posts earlier, the road network generates ground divisions, which are the initial shapes of all my city elements. Some rectangles represent sidewalks or street lanes, concave polygons are used for crossing areas, and generally convex four-cornered polygons are lots. Each shape stores its own modelling rule, and each rule generates new shapes, assigning them new rules. This system can grow any kind of model by recursive calls.

But in this architecture, all shapes share the same canvas and belong to a single level of definition: "city element". This is not enough for my needs, because I need to store data like building type, number of storeys, etc. So I created shape containers.

The containers are organised in a tree, like a hierarchical view of the city elements. For example, lots contain one park and several buildings, buildings contain a number of storeys, storeys store their facades and interior design, facades link to border walls and wall tiles, etc. While modelling shapes, the tree of containers is generated on demand. For example, when a lot rule is applied, it creates a park and a footprint, which both create their containers and declare their associations.

The system is quite strange to manipulate, because there are some bilateral interdependencies (shapes own containers that own shapes), but now that the framework is done, it is transparent and works fine. Every shape organises itself into its parent container, or creates its own container, which is declared to the parent. All connections are managed automatically while modelling.
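The parent/child registration above can be sketched like this. A rough illustration under my own naming assumptions, not the actual framework:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the container tree (hypothetical names): creating a
// container registers it with its parent, keeping the bilateral
// link in sync automatically.
public class Containers {

    static class Container {
        final String label;
        final Container parent;
        final List<Container> children = new ArrayList<>();

        Container(String label, Container parent) {
            this.label = label;
            this.parent = parent;
            if (parent != null) {
                parent.children.add(this); // declared to the parent
            }
        }

        // Depth in the tree: city = 0, lot = 1, building = 2, ...
        int depth() {
            return parent == null ? 0 : parent.depth() + 1;
        }
    }
}
```

In this sketch the constructor does the declaration, so a rule only ever says "create my container under that parent" and never touches the child lists directly.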

Now I have containers that store all the necessary seeds and data about what to generate, and I can grow or erase any generation level I need for any container.

Next step: a navigator that finds nearby containers.

Thursday, April 28, 2011

More rules, fewer bugs, new design!

Here are some visuals of the app after core modifications to avoid generating buggy polygons, and a little work on modelling. My first video is also available on YouTube (see below), showing the generator in action.



Sunday, April 17, 2011

Performance

I've done some optimisation tests with jME3 to increase fps and lower the memory usage. The first step was to merge the geometries together, and it worked well: my framerate increased dramatically.

There are still two problems. First, the number of materials will certainly grow as rules are added, and jME can only merge objects that share the same material, so I will have to test this optimisation method with more realistic rules. Second, memory still overflows because of the number of geometries generated before optimisation, but jME is not to blame for that (as Normen said).

My little 2D extrusion system has one big drawback: I need to make one mesh for each object before optimisation, because the scene graph is the only component capable of moving and rotating my extruded polygons. So I make many meshes, transform them, then merge them together as much as possible. It costs a lot of time and memory. This is a big flaw in the object architecture, because formally the view code ends up responsible for modifying the model.

I should implement my own 3D prisms and transformation code in the model, so I can manage my objects and merge them as much as possible into one single mesh, which would then be handed to the view. Whatever scene graph or rendering method I use, this is what I need to do.
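The core of that merge step is just buffer concatenation with index offsetting. Here is a minimal sketch, independent of jME's actual classes (the `Mesh` type and field names are my own stand-ins):

```java
import java.util.List;

// Sketch of the merge idea (not the jME3 API): concatenate the vertex
// and index buffers of many small meshes into one, offsetting each
// mesh's indices by the number of vertices already written.
public class MeshMerge {

    static class Mesh {
        final float[] positions; // x,y,z triplets
        final int[] indices;
        Mesh(float[] positions, int[] indices) {
            this.positions = positions;
            this.indices = indices;
        }
    }

    static Mesh merge(List<Mesh> meshes) {
        int vertFloats = 0, idxCount = 0;
        for (Mesh m : meshes) {
            vertFloats += m.positions.length;
            idxCount += m.indices.length;
        }
        float[] pos = new float[vertFloats];
        int[] idx = new int[idxCount];
        int pOff = 0, iOff = 0;
        for (Mesh m : meshes) {
            System.arraycopy(m.positions, 0, pos, pOff, m.positions.length);
            int base = pOff / 3; // vertices already written (3 floats each)
            for (int i = 0; i < m.indices.length; i++) {
                idx[iOff + i] = m.indices[i] + base;
            }
            pOff += m.positions.length;
            iOff += m.indices.length;
        }
        return new Mesh(pos, idx);
    }
}
```

Doing this in the model means the view receives one big mesh per material instead of thousands of transformed spatials.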

Here is a pic showing jME's optimisation method: it's not perfect (Ardor, for example, performs a global merge and stores different materials in a single packer) but it does the job: 800,000 triangles at 50 fps.


After some tests and tries, I finally found my way out and got a correct result, with full generation and rendering within a radius of a few hundred metres. Of course, I will need to merge vertices and use a LOD system, but for now, jME's performance is quite good: 3 million triangles, 4 thousand objects, and still 18 fps. Thanks to the community for its help!

But my architectural problem still exists: I need to add a transformation layer in my model, and rewrite some parts of the 3D framework to manage rotation and other things myself. Given how specialised my geometries are, I think there are very good optimisation opportunities, so I know what to do now.


(This city made of 250,000 shapes occupies 600 MB in memory: 50 for the model, 500 for the view, 50 for the control)

Friday, April 15, 2011

More rules

Many of the necessary algorithms are almost working, and I can now write some more rules to enhance the modelling of my buildings. I can:
 - Select a polygon edge by edge, comparing against front/back/left/right or N/S/E/W attributes, or even roadAccess/noRoadAccess
 - Split polygons into one or many widths, and get tiles for windows (for example)
 - Extrude the edges of a polygon and merge touching polygons

These little algorithms, hard to implement given my poor maths level, open up a lot of possibilities, as you can see.
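The edge-selection idea can be sketched with tagged edges and a filter. This is my own illustrative structure (names and tags are assumptions), not the real rule syntax:

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

// Sketch (hypothetical names): tag each polygon edge with orientation
// or access attributes and select edges by a set of wanted tags, in
// the spirit of the front/back/roadAccess selectors listed above.
public class EdgeSelect {

    enum Tag { FRONT, BACK, LEFT, RIGHT, ROAD_ACCESS }

    static class Edge {
        final int index;            // position in the polygon ring
        final EnumSet<Tag> tags;
        Edge(int index, EnumSet<Tag> tags) {
            this.index = index;
            this.tags = tags;
        }
    }

    // Keep the edges carrying at least one of the wanted tags.
    static List<Edge> select(List<Edge> edges, EnumSet<Tag> wanted) {
        List<Edge> out = new ArrayList<>();
        for (Edge e : edges) {
            for (Tag t : wanted) {
                if (e.tags.contains(t)) {
                    out.add(e);
                    break;
                }
            }
        }
        return out;
    }
}
```

A rule can then apply a split or an extrusion only to the selected edges, e.g. windows on ROAD_ACCESS facades.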



Now I have some real performance trouble, because my L-system generates a large number of meshes, and a very large number of invisible/useless faces. I need to optimise both the rendering process and the architecture of my data if I still want to generate whole cities by mass modelling.

The next step will be to change the scene graph; jMonkey does not seem to be designed to handle that kind of procedural generation (no good tools to optimise geometries).

Getting some virtual verticality

Now I need walls! But... I can't find the courage to code a real 3D framework, because I know it's beyond my coding level and would require some very long learning. So I've come up with a trick to do 3D with extruded 2D only.

All shapes now have three rendering values: extrusion height, rotation angle and elevation. Without these values, all shapes are just a mountain of 2D polygons overlapping in every direction. It is in the view that the 2D polygons are placed properly, with the help of the scene graph's transformation methods:
 - First, the polygon is triangulated and converted into a 3D mesh by extrusion into a simple parallel prism,
 - Second, the prism is moved to the correct (x, y) coordinates and raised to the correct altitude,
 - Third, the prism is rotated around a specified axis.
It's like the Egyptians building obelisks: first you extract the object from the ground, then you move it and rotate it into its final place.
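The extrusion step can be sketched as follows. A minimal illustration with my own names; it builds the two vertex rings of the prism and applies the elevation, while rotation stays with the scene graph as described above:

```java
// Sketch of the three-parameter trick: extrude a flat polygon into a
// prism at a given elevation. (Rotation is left to the scene graph
// here; class and parameter names are illustrative.)
public class Prism {

    // footprint: x,y pairs of the 2D polygon. Returns x,y,z triplets:
    // first the bottom ring at `elevation`, then the top ring at
    // `elevation + height`. Side and cap faces would index into these.
    static float[] extrude(float[] footprint, float height, float elevation) {
        int n = footprint.length / 2;
        float[] verts = new float[n * 2 * 3];
        for (int i = 0; i < n; i++) {
            float x = footprint[2 * i], y = footprint[2 * i + 1];
            // bottom ring
            verts[3 * i] = x;
            verts[3 * i + 1] = y;
            verts[3 * i + 2] = elevation;
            // top ring
            int j = 3 * (n + i);
            verts[j] = x;
            verts[j + 1] = y;
            verts[j + 2] = elevation + height;
        }
        return verts;
    }
}
```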

In the rules class, it is much easier to set up my polygons than to dig the stone. Only three params. And because an urban environment is mostly made of horizontal and vertical shapes, I think this method has some potential. Of course, we will still be able to put assets in the scene for more detailed objects that need complex meshes.


Adding rules...

Now, I just have to add rules to get a more realistic city and compute some details on the massed buildings. But to do that, I have to manipulate polygons with a lot of algorithms, each more complicated than the last.

The first algorithm I implemented is the offset/setback operator, which pushes some edges of a polygon inside or outside, changing its topology, to get new shapes of different forms. In these pics, you can see an inside offset, and basic L-shapes and U-shapes giving buildings the width of a classic apartment. There are also some texturing rules, and fence generation for houses. Rule creation is done in minutes.
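A much-simplified stand-in for that operator is a uniform inset toward the centroid. To be clear, this is not the real setback algorithm (which pushes individual edges and can change topology into L- and U-shapes); it only illustrates the simplest uniform case, with names of my own choosing:

```java
// Simplified sketch: shrink (or expand) a convex polygon about its
// centroid. The real offset/setback operator works per edge and can
// change the polygon's topology; this only covers the uniform case.
public class Offset {

    // points: x,y pairs; factor in (0,1) shrinks, >1 expands.
    static float[] scaleAboutCentroid(float[] points, float factor) {
        int n = points.length / 2;
        float cx = 0, cy = 0;
        for (int i = 0; i < n; i++) {
            cx += points[2 * i];
            cy += points[2 * i + 1];
        }
        cx /= n;
        cy /= n;
        float[] out = new float[points.length];
        for (int i = 0; i < n; i++) {
            out[2 * i]     = cx + (points[2 * i]     - cx) * factor;
            out[2 * i + 1] = cy + (points[2 * i + 1] - cy) * factor;
        }
        return out;
    }
}
```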


The most important thing is that these algorithms must work on any polygon, in any situation, because you never know what the road graph will compute for you. This is certainly the most difficult part of the job: no room for quick and dirty code! In a city of 20,000 buildings, the one nasty special case is immediately visible.


At last, the L-system for shape modelling

I now have a social map (invisible in the pics), a road graph of major and minor segments, and a ground division.

Each division will be called an initial shape, and will be the first data given to my new L-system.

The principle is quite simple: the procedural engine now gets a list of shapes, each storing some geometry info and a single production rule. If the rule is not empty, the engine applies it to the shape that stores it. The rule transforms this shape into one or more shapes, and assigns the next (or recursive) rules to the successors. Then the procedural engine gets these successors, and so on.
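The engine loop described above can be sketched in a few lines. This is a minimal derivation loop under my own naming assumptions, not the actual engine:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

// Minimal sketch of the engine loop: each shape carries at most one
// production rule; applying a rule yields successor shapes with their
// own rules, processed until no rule remains. Names are illustrative.
public class ShapeEngine {

    static class Shape {
        final String label;
        final Function<Shape, List<Shape>> rule; // null = terminal shape
        Shape(String label, Function<Shape, List<Shape>> rule) {
            this.label = label;
            this.rule = rule;
        }
    }

    static List<Shape> run(Shape initial) {
        List<Shape> terminals = new ArrayList<>();
        Deque<Shape> queue = new ArrayDeque<>();
        queue.add(initial);
        while (!queue.isEmpty()) {
            Shape s = queue.poll();
            if (s.rule == null) {
                terminals.add(s); // nothing left to derive
            } else {
                queue.addAll(s.rule.apply(s)); // successors re-enter the loop
            }
        }
        return terminals;
    }
}
```

A "lot" rule, for instance, would return a park shape and a footprint shape, each with its own next rule or none.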

This engine is not that easy to manage at the beginning, but it is very powerful and has no limit. I can create as many rules as I need, and call them along any modelling path I decide.

Here is an example of the whole process, from graph to divisions, envelopes and detailed shapes.



Making roads more organic

A friend kept telling me that the pattern of the minor roads was too right-angled. I used to say, "Don't worry, it's only a graph matter and I can change it in five minutes." But would it be that simple?

Five minutes later:


With this procedural architecture, I can work on any stage of the generation process; changes propagate through the generation layers and give good results all the way down. The tool seems to be very powerful, and any road pattern can be randomly computed without causing any exception.

Real Ground Division

Before going further into building modelling, I must get some good foundations for all my ground shapes, and this begins with the road graph. While it's quite easy to get the block limits by intersecting street borders, it is another story to compute proper shapes for all road elements.

And while computing some polygons for crossings and sidewalks, I can see bugs hiding almost everywhere, caused by errors in the graph generation... It's time to take a big step backward and fix the whole generation process.

The main problem with street shape generation is getting orthogonal road ends, so I can apply a nice rectangular texture in the future and avoid sidewalk overlapping. I first compute all the crossing polygons, then create the street quads and, finally, extrude the sidewalks.

Implicitly, the blocks appear, but they must be polygonised too for the division algorithm to work.

Note that the city generation takes 2 or 3 seconds, plus 5 seconds of rendering, geometry by geometry. The FPS is low because there is no optimisation.


Buildings?

Because I don't really like to wait, I do things in the wrong order. Buildings before roads, envelopes before models. Here are some visuals of the lots populated with some kind of building envelopes in random blue colours.

It looks great to me, and gives me the motivation to go further, or to get back to important earlier steps.

Ground divisions

Each road segment stores the width of its street. With the graph and this single piece of data, plus a few days of coding, we get the ground division of the city.

Here is the city after intersecting the street limits to get blocks, and recursively dividing the blocks to get some lots.
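The recursive division can be sketched on axis-aligned rectangles. The real divider works on arbitrary block polygons; this sketch (with names and the max-area criterion as my own assumptions) only shows the recursion pattern:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of recursive lot division: split a block along its longer
// side until every piece is under a maximum area. Illustrative only;
// real blocks are arbitrary polygons, not rectangles.
public class LotDivision {

    static class Rect {
        final double x, y, w, h;
        Rect(double x, double y, double w, double h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
        double area() { return w * h; }
    }

    static List<Rect> divide(Rect block, double maxArea) {
        List<Rect> lots = new ArrayList<>();
        split(block, maxArea, lots);
        return lots;
    }

    private static void split(Rect r, double maxArea, List<Rect> out) {
        if (r.area() <= maxArea) {
            out.add(r); // small enough: this is a lot
            return;
        }
        if (r.w >= r.h) { // cut the longer side in half
            split(new Rect(r.x, r.y, r.w / 2, r.h), maxArea, out);
            split(new Rect(r.x + r.w / 2, r.y, r.w / 2, r.h), maxArea, out);
        } else {
            split(new Rect(r.x, r.y, r.w, r.h / 2), maxArea, out);
            split(new Rect(r.x, r.y + r.h / 2, r.w, r.h / 2), maxArea, out);
        }
    }
}
```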


Road Network

After some experimentation, mostly done to learn how the jME scene graph works, I can tell it works well, and very easily. In no time, I'm ready to focus on my algorithms, and that's a good thing: procedural generation algorithms are not that simple.

The first step is the road network generation, which is done with a simple non-oriented graph of road segments and road nodes (crossings). The generation of the graph is computed by an L-system that calls two agents to manage production rules:
 - The goals manager is a creative agent: it creates one to three road segments from a source segment, depending on its socio-statistical situation (near or far from a city centre, respecting a general road pattern, lost in the countryside...)
 - The constraints manager is a restrictive agent: it checks the generated segments and corrects them if, for example, they run into obstacles or water. It is also self-sensitive, and connects roads when they get too close to each other.

This last class fits in 30 lines, but it has been the most difficult to stabilise, since there are tons of different and hard-to-predict cases.
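The two-agent growth loop can be sketched like this. All names are illustrative stand-ins, and segments are reduced to plain coordinate arrays; the sketch only shows the propose/filter/requeue cycle:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of the two-agent road growth (illustrative names): the goals
// agent proposes successor segments, the constraints agent accepts or
// rejects them, and survivors go back into the queue.
public class RoadGrowth {

    interface Goals { List<double[]> propose(double[] segment); }
    interface Constraints { boolean accept(double[] segment); }

    // A segment is {x1, y1, x2, y2}; growth stops when the queue
    // empties or the network reaches maxSegments.
    static List<double[]> grow(double[] seed, Goals goals,
                               Constraints constraints, int maxSegments) {
        List<double[]> network = new ArrayList<>();
        Deque<double[]> queue = new ArrayDeque<>();
        queue.add(seed);
        while (!queue.isEmpty() && network.size() < maxSegments) {
            double[] s = queue.poll();
            network.add(s);
            for (double[] candidate : goals.propose(s)) {
                if (constraints.accept(candidate)) {
                    queue.add(candidate); // survives the checks
                }
            }
        }
        return network;
    }
}
```

In the real version the constraints agent can also correct a candidate (snap it to a nearby node) instead of simply rejecting it.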

You can see major roads in white, minor roads in green, and two general patterns in action (raster "rectilinear" on the right, and free "organic" on the left). At this point, we've got more bugs than working features, but it's just the beginning!

First steps

In February 2011, I decided to experiment with procedural generation of graphical content and dive into city generation with the help of the papers of Pascal Muller, founder of Procedural, the company that develops the City Engine app.

I code everything in Java under Eclipse 3.6, and I use jMonkeyEngine 3 for rendering and control. On this blog, I'll show some screenshots at different stages of progress, and try to give some explanations.