
Game Design Theory

I am still batting around some sim ideas, but I'm not sure I understand the underlying design that the main engines use.

As I envision it, the physics model should be independent of the game and graphics engines. The program should be able to control the vessels through the same APIs as the player. Also, barring resource constraints, the vessels should be able to be represented either as 3D-rendered objects or as blips on a radar screen. Changing the visual representation of the objects shouldn't change the game; for that matter, neither should changing the physics model.

I'm trying to apply some OOD discussions brought up by Allen Holub with respect to GUI design. He argues that in order for an object to be truly encapsulated, its visual representation must also be controlled by the object. So, for a GUI, instead of obtaining an attribute from an object and then displaying that attribute as a label or text field, you should call the "display" method of the object, telling it "Display yourself here." Carrying this over to a sim/game, a Fury object would have attributes such as location, orientation, velocity, systems health, etc. Instead of obtaining the location and orientation attributes and using them to render the Fury, a controller object would ask the Fury to display itself, either in a view screen, a radar screen, a control panel, or some other viewer. Based on the viewer's type, the Fury would represent itself as a grouping of polygons, a single dot, a series of controls, or whatever representation is best for that viewer type.
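
To make that concrete, here's a rough Java sketch of the "display yourself" idea. All the class and method names are made up for illustration; the point is only that the controller never pulls attributes out of the Fury just to draw it somewhere.

[code]
// Every place a ship can appear implements one of these (hypothetical) viewers.
interface ViewScreen   { void drawMesh(String meshName, double[] position, double[] orientation); }
interface RadarScreen  { void drawBlip(double[] position); }
interface ControlPanel { void drawGauge(String label, double value); }

class Fury {
    private final double[] position    = new double[3];
    private final double[] orientation = new double[3];
    private double hullHealth = 1.0;

    // The controller tells the Fury where to display itself; the Fury picks
    // the representation that suits that viewer type.
    void displayOn(ViewScreen screen)  { screen.drawMesh("fury.mesh", position, orientation); }
    void displayOn(RadarScreen radar)  { radar.drawBlip(position); }
    void displayOn(ControlPanel panel) { panel.drawGauge("Hull", hullHealth); }
}
[/code]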

Does this thinking make sense, or am I complicating things too much?

If the above is on track, then the simplest sim would be the equivalent of an IFR (instrument flight rules) trainer, where there are no outside visual cues (i.e., no windows), only instruments showing the necessary attributes (velocity, acceleration, orientation, relation to fixed points such as the station). I know that this wouldn't be the coolest sim, but it would allow for thorough debugging of the world model without introducing complications from the graphics and game-rules components. Another part of this phase would be to fine-tune the controller options and interfaces, both for the human player and the computer. Also, if networking were to be a factor, this would be the time to create and debug it.
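
To give an idea of how bare-bones that first phase could be, here's a toy Java loop (all names invented) where the physics model ticks along and the only output is an instrument readout:

[code]
// A minimal, instrument-only "IFR trainer" loop.  No renderer involved;
// the world model is observed purely through instrument readouts.
class Vessel {
    double x, vx;                   // one axis is enough to show the shape of it

    void step(double dt, double thrust) {
        vx += thrust * dt;          // integrate acceleration
        x  += vx * dt;              // integrate velocity
    }
}

class InstrumentPanel {
    void show(Vessel v) {
        System.out.printf("pos=%8.2f m  vel=%8.2f m/s%n", v.x, v.vx);
    }
}

public class IfrTrainer {
    public static void main(String[] args) throws InterruptedException {
        Vessel ship = new Vessel();
        InstrumentPanel panel = new InstrumentPanel();
        double dt = 1.0 / 60.0;     // fixed timestep, 60 updates per second

        while (true) {
            ship.step(dt, 2.0);     // constant thrust just so the numbers move
            panel.show(ship);       // the only output is the instrument readout
            Thread.sleep(16);
        }
    }
}
[/code]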

The next step would most likely be to create an appealing graphical representation. This could start with low-level things such as stick figures in a 3D grid, then progress to more glamorous items such as fully textured meshes.

Finally, the game component would be created by adding in game rules, objectives, and AI. This sounds simple only because, frankly, it is the part I understand least.

Does any of this make sense?

------------------
bobo
<*>
B5:ITF

Comments

  • Biggles (The Man Without a Face)
    The most important thing to keep in mind when laying out your game engine system is to make it modular (at least in my opinion). You have the graphics engine, you have the physics engine, you have the sound engine, you have the input system, etc. Organise the whole system around this modular layout.
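
    Just to sketch what I mean (all interface names invented), the top level could be nothing more than a handful of subsystems behind their own interfaces:

    [code]
    // Each subsystem sits behind its own interface; the top-level engine
    // only ties them together and knows nothing about their internals.
    interface GraphicsEngine { void renderFrame(); }
    interface PhysicsEngine  { void step(double dt); }
    interface SoundEngine    { void update(); }
    interface InputSystem    { void poll(); }

    class GameEngine {
        private final GraphicsEngine graphics;
        private final PhysicsEngine  physics;
        private final SoundEngine    sound;
        private final InputSystem    input;

        GameEngine(GraphicsEngine g, PhysicsEngine p, SoundEngine s, InputSystem i) {
            graphics = g; physics = p; sound = s; input = i;
        }

        // One tick of the game: swap any module out without touching the others.
        void tick(double dt) {
            input.poll();
            physics.step(dt);
            sound.update();
            graphics.renderFrame();
        }
    }
    [/code]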

    John should say something. He knows heaps about game engine design and OO.

    ------------------
    [url="http://www.minbari.co.uk/log12.2263/"]Never eat anything bigger than your own head.[/url]
    "Nonono...Is not [i]Great[/i] Machine. Is...[i]Not[/i]-so-Great Machine. It make good snow cone though." - Zathras
  • GrantNZ (Earthforce Officer)
    Makes sense to me.

    My only suggestion would be to perhaps use some more layers of abstraction (which is really just building on Biggles' "modularity" comment). I.e., rather than having a Fury render itself or plot itself on a radar, have it signal the radar that it is at XYZ, and have it request a Fury-shaped render from the rendering engine.

    That means you can change engines (perhaps from DirectX to OpenGL) without having to change all the Fury code. And radars can make up their own minds about how strong a blip should appear, in what style, etc.
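
    In rough Java terms (all names invented), something like:

    [code]
    // The Fury never draws anything itself: it tells a radar where it is and
    // asks the rendering engine for a Fury-shaped render.
    interface Radar {
        void contactAt(double x, double y, double z);    // radar decides blip strength/style
    }

    interface RenderingEngine {                           // could be DirectX- or OpenGL-backed
        void queueModel(String modelName, double x, double y, double z, double heading);
    }

    class Fury {
        private double x, y, z, heading;

        void reportTo(Radar radar) {
            radar.contactAt(x, y, z);                     // "I am at XYZ", nothing more
        }

        void requestRender(RenderingEngine renderer) {
            renderer.queueModel("fury", x, y, z, heading);
        }
    }
    [/code]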

    Hope that makes some sense.

    :)
  • Biggles (The Man Without a Face)
    Grant makes some very good points. You don't want to have drawing code infesting every class. You want it nicely kept in one place. Then you can change it easily.


    Now then, input systems. :)
    Here are my current thoughts (in brief):
    You have an input device object, from which you can inherit to get things like a keyboard, a mouse, etc. How do you map what those input devices do back to actions in the game? My idea is to have a command map. You basically have a list of all the valid input values for each device, and next to each of these you place the command it performs upon getting a signal. You could possibly specify the type of signal as well (like on hold-down, on release, etc.).
    The other option would be a mirrored version: have a big list of all commands available and, next to each, note which input device and which input signal on that device will trigger it. This is what you normally see in the options dialog for games.
    Now, the next thing is how to specify these commands. You could make each one a console command and, when it is triggered, send it to your console system. But you might not have a console, in which case you would need to code in how each command performs. There is also the question of GUI input, where you won't be using commands, so you'll need some way of getting raw input data, or some similar method of getting GUI input.
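
    For what it's worth, the first version (input value -> command) might look something like this in Java. All the names are invented; it's just to show the shape of the map:

    [code]
    import java.util.HashMap;
    import java.util.Map;

    enum SignalType { PRESS, RELEASE, HOLD }

    interface Command { void execute(); }

    // For each device input (and signal type), store the command it triggers.
    class CommandMap {
        private final Map<String, Command> bindings = new HashMap<>();

        void bind(String input, SignalType signal, Command command) {
            bindings.put(input + ":" + signal, command);
        }

        void onSignal(String input, SignalType signal) {
            Command command = bindings.get(input + ":" + signal);
            if (command != null) {
                command.execute();
            }
        }
    }

    class Demo {
        public static void main(String[] args) {
            CommandMap map = new CommandMap();
            map.bind("KEY_F",    SignalType.PRESS, () -> System.out.println("fire weapon"));
            map.bind("JOY_LEFT", SignalType.HOLD,  () -> System.out.println("turn left"));

            map.onSignal("KEY_F", SignalType.PRESS);     // -> "fire weapon"
        }
    }
    [/code]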

    ------------------
    [url="http://www.minbari.co.uk/log12.2263/"]Never eat anything bigger than your own head.[/url]
    "Nonono...Is not [i]Great[/i] Machine. Is...[i]Not[/i]-so-Great Machine. It make good snow cone though." - Zathras
  • GrantNZ (Earthforce Officer)
    The input system is something I haven't thought about too much as yet. It presents some interesting communication difficulties, though.

    My instinct is to design some sort of interface system. Things requesting input (player-controlled ships, mouse pointers, whatever) could register a request for input from the interface - e.g. the mouse pointer could request x/y movement; a ship could request x/y movement as well as 15 keys for various functions. The interface system has the task of matching up physical controls (mice, joysticks, buttons) with the requested capabilities. (This would of course be based on user preferences - the user might have already selected the joystick for x/y movement of the ship.)

    The interface has no knowledge of what each command does. It simply provides either callback ("oi, ship, the fire button has been pressed") or polling ("the current x position is 497.8") capabilities.

    Game objects should of course release keys as they fall out of scope, just as they would release memory etc.

    There is also the possibility of temporary overrides, especially for GUI functions. If the user calls up a menu, create a mouse pointer and have it override mouse input until the menus are finished with. Text boxes can call for overridden alphanumeric keys.
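
    Something along these lines, maybe (all names invented):

    [code]
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Callbacks a registered object can receive.
    interface ControlListener {
        void onButton(String control);               // "oi, ship, the fire button was pressed"
        void onAxis(String control, double value);   // "the current x position is 497.8"
    }

    class InterfaceSystem {
        // Physical input -> requested control, based on user preferences,
        // e.g. "JOY_BUTTON_0" -> "fire".
        private final Map<String, String> bindings = new HashMap<>();

        // Overrides are a stack: whoever is on top (a menu, a text box)
        // gets the input until it is popped off again.
        private final Deque<ControlListener> listeners = new ArrayDeque<>();

        void registerControl(String control, String physicalInput) {
            bindings.put(physicalInput, control);
        }

        void pushListener(ControlListener l) { listeners.push(l); }
        void popListener()                   { listeners.pop(); }

        // Called by the low-level device code when something happens.
        void buttonPressed(String physicalInput) {
            String control = bindings.get(physicalInput);
            if (control != null && !listeners.isEmpty()) {
                listeners.peek().onButton(control);
            }
        }
    }
    [/code]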

    That [i]seems[/i] like a good modular system to me. Although like I say I haven't really thought too much about it yet.
  • Biggles (The Man Without a Face)
    The problem I can see with that approach is that you'll never know exactly what inputs will be required, except maybe when the game is running. In this case, how would you do an options screen for configuring input devices?

    ------------------
    [url="http://www.minbari.co.uk/log12.2263/"]Never eat anything bigger than your own head.[/url]
    "Nonono...Is not [i]Great[/i] Machine. Is...[i]Not[/i]-so-Great Machine. It make good snow cone though." - Zathras
  • GrantNZ (Earthforce Officer)
    True, it wouldn't be trivial. Maybe another level of abstraction is required in my system. (I love levels of abstraction :D )

    Two interfaces. Input interface, which works as above. Game-input interface, which registers all needed commands in one lump, and dishes them out to game objects. How this would work I have [i]no[/i] idea :)
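
    Purely guessing at the shape of it (all names invented): the game-input interface could register the full command list up front, so an options screen has something to enumerate and rebind, and game objects attach handlers only to the commands they care about while they're in scope.

    [code]
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    class GameInputInterface {
        private static final Runnable NO_OP = () -> { };
        private final Map<String, Runnable> handlers = new HashMap<>();

        // Registered in one lump at startup -- this is what the options screen sees.
        void declareCommands(Set<String> commandNames) {
            for (String name : commandNames) {
                handlers.put(name, NO_OP);
            }
        }

        Set<String> availableCommands() { return handlers.keySet(); }

        // A ship (or menu, or whatever) attaches to a command while it is in scope.
        void attach(String command, Runnable handler) { handlers.put(command, handler); }
        void detach(String command)                   { handlers.put(command, NO_OP); }

        // The lower-level input interface calls this when a bound input fires.
        void trigger(String command) { handlers.getOrDefault(command, NO_OP).run(); }
    }
    [/code]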
  • bobo (A monkey)
    grrr.. my patterns book is at home.

    You need to use the same interface pattern as is used for menus. On one side you have the control operations (increase thrust, turn left, fire weapon), and on the other side you have the input device operations (press top hat, press F key, push joystick to the left, move the slider forward, etc.). In between is the mapping process (push joystick to the left = turn left, press F key = fire weapon, etc.).

    I'll look it up later. The key point is that the input devices themselves should derive from an abstract class, so that any device can serve as an input once it supports the interface. Java 3D has this concept in place, so that you can swap a joystick for a keyboard for a mouse for a thingamajig yet to be designed.

    This also lets you design the AI to use the same control operations as you; the only difference is the calling method.
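
    Roughly, in Java (all names invented):

    [code]
    // Control operations on one side...
    interface ShipControls {
        void increaseThrust(double amount);
        void turn(double amount);
        void fireWeapon();
    }

    // ...input devices as an abstract class on the other side.  The mapping in
    // between is simply which device event calls which control operation.
    abstract class InputDevice {
        protected ShipControls controls;

        void attach(ShipControls controls) { this.controls = controls; }

        abstract void poll();    // read the hardware and call control operations
    }

    class Keyboard extends InputDevice {
        void poll() {
            // if F is down: controls.fireWeapon();  (real key handling omitted)
        }
    }

    class Joystick extends InputDevice {
        void poll() {
            // controls.turn(readAxisX());           (real axis handling omitted)
        }
    }

    // The AI is just another caller of the same control operations.
    class AiPilot {
        private final ShipControls controls;

        AiPilot(ShipControls controls) { this.controls = controls; }

        void think() {
            controls.turn(-0.2);   // e.g. line up on a target
            controls.fireWeapon();
        }
    }
    [/code]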

    ------------------
    bobo
    <*>
    B5:ITF
  • Biggles (The Man Without a Face)
    Yes, this is an important part of the modular design. The AI interface needs to look like a control interface. Then you can just swap them over and suddenly the AI will be flying your ship or whatever.

    ------------------
    [url="http://www.minbari.co.uk/log12.2263/"]Never eat anything bigger than your own head.[/url]
    "Nonono...Is not [i]Great[/i] Machine. Is...[i]Not[/i]-so-Great Machine. It make good snow cone though." - Zathras