Three months later, and I’m back.

Tactics is relevant again: I’ve been re-sorting code, making things less ugly, and adding functionality to the engine for almost a week now, which means I’m back!

First of all, what have I done up until a week ago?

I was working on graphics, getting polygons to rotate properly, and I did it.
I released a working version of this, with a square, movement, and rotation. (I was pretty proud of myself; I don’t care what anyone else thinks.) Colors, scaling (i.e. zoom), and non-oblong shapes are all completely within the scope of the code I have written, but the point of the release was to test the one feature I had doubts about, not to show off the features I’m comfortable with.

Then I moved house and forgot about it until now. With school exams finished for the semester, I started programming last week, and I feel comfortable saying that I am back!

Since graphics are working, I’m changing the layout of the source and creating a “game instance” class that will help separate the graphics, engine, and game from each other. After that, it will be on to actual game mechanics like entities and movement!

Exciting times.


Different Control Schemes in Tactics

A design I have long had in mind is for three different control / planning systems working together in the same game environment.


Avatars

Normal tactical input, in which you construct a game plan and submit it to see the consequences of your plan. After seeing the consequences, you reevaluate and repeat the process, stepping forward until you reach your goal, or screw up and get shot.

The key element of avatars is their reaction time, which would control how the characters react to events, and how you think about your plan as you make it. When you submit a plan, you would only see up to a point in the resulting consequences. For example:

Unit with 3s reaction time:

  • Peek around corner over 2 seconds:
    • move out for 0.5,
    • look for 1.0 and
    • move back for 0.5
  • stand behind cover for three seconds

The five-second plan would be submitted, and you would be shown whatever the unit saw for the first (five minus three) two seconds of the plan; the remaining (in this case idle) three seconds of the plan would be locked, as a trade-off for the fact that you can see what happened in the first two.

This of course represents the unit spending three seconds thinking about what they just saw, and evaluating the plan they will take next.
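The lockout arithmetic above can be sketched in a couple of lines. This is a minimal illustration only, with hypothetical function names, not the engine’s actual code:

```cpp
#include <algorithm>

// A unit's plan runs for plan_seconds; the player is shown consequences
// only for plan_seconds - reaction_seconds, and the rest stays locked.
double visible_seconds(double plan_seconds, double reaction_seconds) {
    return std::max(0.0, plan_seconds - reaction_seconds);
}

double locked_seconds(double plan_seconds, double reaction_seconds) {
    return plan_seconds - visible_seconds(plan_seconds, reaction_seconds);
}
```

With the example above, a 5-second plan and a 3-second reaction time give 2 visible seconds and 3 locked seconds.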

Avatars are a very central system in the game, so they will be explained in a lot more detail in the future.


Terminals

I’m not sure why I chose this name for them; a terminal is a giant, every-possibility-considered plan that branches depending on certain events, and it is generally not much fun to develop plans in.

For the most part they are identical to avatars, and depending on the level design they may just act as a script that saves simple tasks you have done before, in case a future opponent does exactly the same thing in the early phases of your next match.

They could also serve in a terminal-versus-terminal game, in which you need to develop the best plan to push your opponent into a checkmate position where they have to reevaluate an earlier branch in their noded plan. (Whether or not either of the previous two paragraphs made sense, you don’t actually need to know what I mean.)

I have no ideas at the moment for making terminal design more dynamic, though this will be extremely important: otherwise, simple adjustments to a unit’s placement would require an entirely duplicated branch. Overlap with ideas presented in the final control type (scripts) will probably prove useful.
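One way to picture a terminal is as a tree of tasks with condition-guarded branches. This is a hypothetical sketch, with illustrative names and types rather than the engine’s actual design:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Stand-in for whatever the unit can observe when a task finishes.
struct GameState { bool enemy_spotted = false; };

struct PlanNode {
    std::string task;  // e.g. "advance", "hold position"
    std::vector<std::pair<std::function<bool(const GameState&)>,
                          std::shared_ptr<PlanNode>>> branches;

    // Follow the first branch whose condition matches the observed state;
    // nullptr means the plan has run out and must be reevaluated.
    std::shared_ptr<PlanNode> next(const GameState& s) const {
        for (const auto& branch : branches)
            if (branch.first(s)) return branch.second;
        return nullptr;
    }
};
```

The “entirely duplicated branch” problem shows up immediately here: any variation in a task means a whole new subtree, which is why making this more dynamic matters.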


Scripts

Programmed units that aren’t controlled at all during the progress of the game. My plan for making them is to create simple scripting capabilities, then wait for someone to learn how to use them and work out what their limiting feature is; when they give me feedback, I will be able to significantly improve the potential of the script-designing system. (So follow this blog! Leave comments! I’ll be fueled almost entirely by fanbase in not too much time at all!)

Scripts would allocate nametags to entities and go through processes to determine what they ought to try doing. (That’s all I’ve got at the moment…)
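The nametag idea might look something like this. The rule used here (tag by distance) is purely an invented example to show the shape of it:

```cpp
#include <map>
#include <string>
#include <vector>

// Stand-in for an entity the script can currently see.
struct Entity { int id; double distance; };

// Scan visible entities and allocate a nametag to each by a simple rule;
// later steps of the script would branch on the tag.
std::map<int, std::string> allocate_nametags(const std::vector<Entity>& seen) {
    std::map<int, std::string> tags;
    for (const Entity& e : seen)
        tags[e.id] = e.distance < 10.0 ? "threat" : "contact";
    return tags;
}
```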

Script design would also allow some very powerful features to be added to the terminal and avatar interfaces… This game could be good.

Integrated Scripts

Scripts would have the option to utilize information laid out on the map by the map-maker, basically to maximize the success of NPC enemies.

Scripts would be made to use place tags, and then a map-maker (perhaps the same person who made the script?) would place these tags on their map to create the I.S. functionality.
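A place tag could be as simple as a named point the script looks up without knowing the map’s layout in advance. A minimal sketch, with assumed names throughout:

```cpp
#include <map>
#include <optional>
#include <string>

struct Point { double x, y; };

// The map-maker's contribution: named positions dropped onto the map.
using PlaceTags = std::map<std::string, Point>;

// The script's side: ask for a tag by name; an empty optional means the
// map-maker didn't provide it, and the script falls back to default behavior.
std::optional<Point> find_place(const PlaceTags& map_tags,
                                const std::string& name) {
    auto it = map_tags.find(name);
    if (it == map_tags.end()) return std::nullopt;
    return it->second;
}
```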
In the future I’ll explain the way that these three interact with each other, and I shall also explain how the latter two systems have very strong uses in NPC construction.

Building a 2D World from Polygons

In my tactical project, I am building my world from polygons.

There are, however, a variety of polygon types serving the functions of the world:


Platforms

A floating platform shape, suspended in 3D space.

They have a collisionable floor and are given a specific designated height for wall structures to occupy.

By constructing a prism in the virtual space, each platform can be assigned a “field region”; the field regions of two platforms can never overlap, but they can touch at their extremities.

A floor thickness may be possible, but probably not until later in the project, especially since platforms won’t support 3D suspension completely anyway… Not for a long time…
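The field-region rule (never overlap, touching allowed) can be sketched with axis-aligned prisms. This is a simplification for illustration; real regions would presumably follow the platform’s polygon outline:

```cpp
// An axis-aligned prism standing in for a platform's field region.
struct Prism {
    double min_x, max_x, min_y, max_y, min_z, max_z;
};

// True only when two prisms properly overlap. Strict inequalities mean
// shared faces or edges count as touching, which the rule allows.
bool properly_overlap(const Prism& a, const Prism& b) {
    return a.min_x < b.max_x && b.min_x < a.max_x &&
           a.min_y < b.max_y && b.min_y < a.max_y &&
           a.min_z < b.max_z && b.min_z < a.max_z;
}
```

A level editor could run this check on every pair of platforms and reject a placement that returns true.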


Holes

This marks a part of a platform that isn’t occupied by floor; this way, creating a hole in the floor isn’t the excruciating process of drawing a tightly folded C-shape (which would no doubt result in collision errors if things fell smack on the seam). The holes are also more manipulable this way, in both level editing and meta-blocks.


Blocks

Block is another word for wall. A block is a polygon set inside a platform polygon; the polygon represents a wall with the height associated with the platform.

Blocks are, of course, collisionable.

Properties such as visibility (glass) and penetrability (plaster/otherwise thin walls) are so far unconsidered; they would likely take on a specialized polygon based on the Block.
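Since a block is just a polygon, the most basic collision query against it is a point-in-polygon test. Here is the standard ray-crossing approach, as an assumed technique rather than the engine’s actual code:

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

// Classic even-odd test: cast a horizontal ray from p to the right and
// count how many polygon edges it crosses; an odd count means inside.
bool point_in_polygon(const Vec2& p, const std::vector<Vec2>& poly) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                      (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside;
}
```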


Meta-blocks

Meta-blocks are also set within a polygon. The specifics can be explained in a future post, but a meta is an alternate map for the platform, alongside programmable property systems that would allow you to control when, and where, the alternate mapping would replace the current one.

I don’t know if that makes sense, but the system I have in my head allows for the creation of destructible terrain, interactive elements such as oil slicks, and special map triggers, for all of your map-making needs.


Effectygons

The just-now named “effectygons” will manage certain live gameplay aspects of the game.

Effectygons will always hold the kinds of property that meta-blocks use as input, and will also have the possibility to interact quite heavily with meta-blocks (as well as with other effectygons; an oil slick would probably work better as just an effect!).

The flame of a flame-thrower, the smoke or explosion of a grenade, and specialties such as liquids and chemical coverings are all possibilities of this system.

The exact extent to which they will be programmable is currently unknown; the only things I really understand are how properties and timers will work, and how meta-blocks can be used to extend their effects in map-specific ways. Things like fluid dynamics, or projectiles parting smoke, would be fun and ideal, but they aren’t confirmed… I don’t know yet how they might work.
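The properties-and-timers part that I do understand could be sketched like this, as an assumed shape rather than a commitment:

```cpp
#include <map>
#include <string>

// An effectygon carries named properties (the kind of thing a meta-block
// could read as input) and a lifetime that ticks down each update.
struct Effectygon {
    std::map<std::string, double> properties;  // e.g. {"blocks_vision", 1.0}
    double seconds_left;

    // Advance the timer; returns false once the effect has expired and
    // should be removed from the live game.
    bool update(double dt) {
        seconds_left -= dt;
        return seconds_left > 0.0;
    }
};
```

A grenade’s smoke would be an Effectygon with a “blocks_vision” property and a few seconds of lifetime; a nearby meta-block could read that property to decide whether to trigger.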


Objects?

Moving and functional blocks that have a height independent of their platform could be useful for map design.

Tables, working doors, and miscellaneous objects and blocks would ideally be constructed in this manner… the possibility of vaulting low objects is an immensely exciting idea for a tactical shooter design as well.

The problem is that non-rigid objects create complications: certain situations may call for explosions and physics, the latter of which I am undoubtedly incapable of delivering. Even if I could, it would be very displeasing to have objects that behave with physics but cannot pitch or roll due to their 2D constraints… and it would be even more displeasing if they actually did pitch and roll in what is otherwise a beautiful and elegant design scheme.

As indicated by the question mark, I am not certain enough of this to really give much speculation or implication about how it would work; my first priority is working that out on a semi-technical level, to get an idea of complexity and direction.

Time Stamp

These are my thoughts and ideas so far!

They have been a few months in the making, so hopefully they are going to be good.

I’ll post some follow-up as more ideas and thoughts come… but first I am going to be posting about the technical side of collision detection in a tactical game environment such as the one I am planning and making.


The tactical project has been long in planning, and as is often the case with me, it is the final design scheme coming out of several similar or connected design goals.

The newest element of this complete plan is the use of polygons for all terrain design.

The idea came from the concept of a mass horde fighting game meant to feel like L4D in many ways; polygons would allow for terrain that feels nice and dynamic in a way that rectangles or tiles can’t achieve.
A point of note here is that my main concept vision was a pair of cliff faces with a path between them: a horde of circles rushing in one end, and a foursome of circles with guns standing at the opposite end. In a tactical environment that could be very fun to watch and manipulate.

The advantages with polygons I have in mind are:

calculations are easy – Detecting if two lines are touching isn’t very difficult, and relevant ray-casting calculations are well within my reach.
works well with top down – Complex object designs aren’t very friendly with a top down 2D view. I’d like to go directly top down and polygons are my friend here.
no losses – Tile designs and orthogonal designs can be made with polygons anyway.
rendering is easy – There are functions available within the Windows API that allow for polygon rendering; super easy.
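On the “calculations are easy” point: detecting whether two segments cross is a handful of cross products. This is the common textbook orientation test, sketched here as an assumption about the approach, not the project’s actual code:

```cpp
struct Pt { double x, y; };

// Signed area of the triangle (o, a, b): positive when a->b turns left of o->a.
double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True when segments ab and cd properly cross each other (endpoints of
// each segment lie strictly on opposite sides of the other's line).
bool segments_cross(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    double d1 = cross(a, b, c), d2 = cross(a, b, d);
    double d3 = cross(c, d, a), d4 = cross(c, d, b);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```

Collinear and endpoint-touching cases would need extra handling, but this covers the common case cheaply.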

Using block-color polygons to build a world populated with block-circle entities and projectiles looks very nice in my head; although special visual effects aren’t within my reach, I can see things working quite well, and being quite low-demand to build and operate.

The first post: Details and thoughts at this time.

“My affair with C++” is going to be a project blog about my two standing C++ projects, as well as whatever C++ related learning events happen along the way.

Project number one:

Tactics engine: a top-down, block-colour tactical engine with one gamemode in mind. I call it an engine because, from playing them, I can see how versatile tactical game environments are.

Zombie Text:

A kind of sketchy zombie-survival text thingo that might be okay if I make it.
The idea comes from the fact that all of the zombie PBBGs I have ever seen either are crap, are non-functional at this time, or I can’t remember what they are called and where to find them. (Oh, there is this one semi-decent one, but it is all freemium and junk… nice inspiration from it, though!)

Yeah so I’ll be working on them, talking about C++ stuff as it comes by, and thinking about where I’d like to take both projects (if only as an experiment for hypothetical game design)

Let me know if you are interested in contributing thoughts and ideas, or playing around with my early versions as they start to actually pump out!

Thanks, Spivee.