Spellrazor

Spellrazor - a haunted arcade game from 1981 · By Fluttermind

Did score values change?

A topic by a_mandible · Created Mar 02, 2016 · Views: 889 · Replies: 9

Since the 2/29 update, I'm getting farther in the game but scoring lower. Has my playstyle changed, or did enemies become worth fewer points or something?

Developer

Hmm. Scoring should be identical. I didn't alter enemy or multiplier values at all. I'll look into it tomorrow.

Okay, more data. I'm playing on a Mac.

Playing version 0.9.12 "Spleen", the very first enemy I kill in the game is worth like 200 points, and the multiplier goes up by 0.2, then 0.2, then 0.4, then 0.4, then 0.6... I finish level 1 with about 3000 points.

Playing version 0.9.14 "Brains", the first enemy is about 100 points and the multiplier only goes up by 0.1 each time no matter what. I finish Level 1 with about 1000 points.

I seem to have missed 0.9.13, so the change could have been there too.
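A toy back-of-envelope suggests a single doubling bug would explain both numbers. Assuming an invented scoring rule (score += points * multiplier per kill, then multiplier += a fixed step, ignoring the growing steps above):

```python
# Toy model, invented for illustration: doubling both the per-kill
# value and the multiplier step roughly triples a level's total.
def level_total(points, step, kills=10):
    score, multiplier = 0.0, 1.0
    for _ in range(kills):
        score += points * multiplier
        multiplier += step
    return score

print(level_total(100, 0.1))  # ~1450: same order as the ~1000 in "Brains"
print(level_total(200, 0.2))  # ~3800: same order as the ~3000 in "Spleen"
```

So one bug that doubles everything could plausibly account for both the per-enemy values and the level totals.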

Developer

Ugh. I know what this is. You, sir, are a genius for spotting this.

There has been a bug for months. When a creature died, it died at least twice...

...but the reward code was broken, so creatures never dropped the correct rewards - just a single one. Twice.

I fixed it.

This broke the multiplier code. I will fix. Thanks so much!

[Edit: (sigh) fixed!]
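For the curious, here's roughly what that failure mode looks like, as a minimal Python sketch with invented names (Game, Creature, on_death) - not the actual Spellrazor code:

```python
# Hypothetical sketch of the double-death bug, not real Spellrazor code.

class Game:
    def __init__(self):
        self.score = 0.0
        self.multiplier = 1.0


class Creature:
    def __init__(self, points):
        self.points = points
        self.dead = False

    def on_death(self, game):
        if self.dead:   # the missing guard: without it, a creature hit
            return      # twice in one frame re-runs everything below
        self.dead = True
        game.score += self.points * game.multiplier
        game.multiplier += 0.1  # pre-fix, this effectively ran twice per kill
```

With no guard, every kill scored twice and bumped the multiplier twice - which lines up with the doubled enemy values and doubled multiplier steps reported above.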

I'm always curious about the lack of TDD in game dev. Several of the errors discussed in the forums sound like exactly the kind that TDD would make very hard to introduce. It's not my business to tell other people how to code; no doubt they operate under constraints I'm not aware of. I'm just curious to hear other people's thoughts, is all. I often hear that people think automated testing of games is harder than testing other software, but whenever I try to understand why, the reasons they give are just a subset of the same reasons that testing-done-naively can be hard in any domain. Please educate me if you think otherwise - I'm just trying to discuss, not criticise. :-)
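To make that concrete, here's the sort of test I have in mind, as a minimal sketch in Python (pytest style). The tiny ScoreKeeper class is invented for illustration - a stand-in for whatever owns score and multiplier, not Spellrazor's actual code:

```python
# Invented stand-in for a game's scoring logic, for illustration only.
class ScoreKeeper:
    def __init__(self):
        self.score = 0.0
        self.multiplier = 1.0
        self._dead = set()  # ids of creatures already credited

    def register_kill(self, creature_id, points):
        if creature_id in self._dead:  # duplicate death events are no-ops
            return
        self._dead.add(creature_id)
        self.score += points * self.multiplier
        self.multiplier += 0.1


def test_duplicate_death_events_are_credited_once():
    keeper = ScoreKeeper()
    keeper.register_kill("grunt-1", points=100)
    keeper.register_kill("grunt-1", points=100)  # same creature "dies" again
    assert keeper.score == 100.0                 # not 200
    assert abs(keeper.multiplier - 1.1) < 1e-9   # one bump, not two
```

A test like that pins down the intended behaviour once, and then screams the moment a refactor accidentally makes deaths fire twice.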

Developer

In my case it's simply because I'm writing this entirely by myself, and I'm trying to get as much stuff in and fix things as quickly as possible, which inevitably leads to errors. In addition, in my many years as a professional dev I've never seen anyone work to a TDD - they were all the rage in 1998, and swiftly abandoned by EA and Microsoft thereafter. This is partially because - unlike most other software - the actual design of game systems changes rapidly. This is exacerbated as teams get smaller, resulting in a large lines-of-code-per-coder ratio; basically, the fewer people you have, the more impact any one person tends to have (by necessity). Since there's little need for communication within a team of, say, 'one', many things are simply held in the mind of the coder. You'd easily double the length of development writing and re-writing TDDs, and the 'try it and see' prototyping methodology would fail as speed gave way to doubt.

In large projects with very few inputs and outputs in a non-realtime system (say payroll, where I worked briefly as a coder at IBM), the expected outputs for any given inputs are 100% known. Writing software to spot deltas between expectations and actual results is relatively trivial. Games are a lot more complex, with a lot of chaotic interaction, and it would often take longer to write test cases than to 'try the feature out' and get errors.

In the particular case you mention, I *could* have isolated inputs and outputs, and written a test script to ensure this was occurring. But that only makes sense after the fact, once the problem is perceived. For this to be effective as a general strategy, I'd have to write code covering every input and output in the game, under every random level generation, with every variation of key-press pattern. That is simply not tenable, and almost certainly the reason most AAA teams do nothing of the kind outside of a couple of locked-down, easy-to-test areas. At most, they'll write a 'bot' to complete the game and ensure they've not completely ruined it with the latest fix.
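A minimal sketch of that 'bot' idea, assuming a hypothetical game object with a step() method and a score attribute (invented names, not anyone's real API):

```python
import random

def smoke_run(game, frames=10_000, seed=1234):
    """Feed the game random inputs for a while and check it survives.

    This is a crash-and-invariant check, not a correctness oracle;
    the fixed seed makes any failure reproducible.
    """
    rng = random.Random(seed)
    for _ in range(frames):
        game.step(rng.choice("wasd "))  # hypothetical input alphabet
        assert game.score >= 0          # cheap sanity invariants only
```

It tells you the build still runs, not that the scoring is right - which is exactly the limitation I'm describing.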

This is the case for most AAA games, and why test-teams are the usual method of weeding out this kind of thing. Since Spellrazor is a free game, creating additional costs for testing would be inadvisable at this stage. As a result, I'm using you kind folks as guinea pigs.

I hope you don't mind too much. :-)

Ha! I absolutely don't mind - I'm loving it to death. My only misgiving is that I don't get much time to play at this phase in my life. But I got in an hour tonight, which makes me happy.

I really appreciate the detailed and thoughtful response. My personal experiences don't jibe with that, but that's what keeps the world such a wonderful place, so I'm not going to say you're wrong.

I agree that some fields of software must have requirements that are easier to pin down and keep static than games do. But I also think "failure to figure out and pin down requirements" is commonly recognised to be the single biggest problem in software of many kinds. I suspect (but have no hard evidence) that games do not suffer especially badly in this regard. All complex projects have terrible difficulty establishing just what it is they ought to deliver. Requirements get added and changed in major ways the whole way through, up to and after delivery - to the extent that this causes a large proportion of software projects to fail and make huge financial losses, or be abandoned. On my current project, we just did eight weeks of work for an external client, with iterative delivery and constant feedback, and then after completion last week we found out that there had apparently been communication problems, because what we delivered is not useful to them at all. Happens all the time, at all scales.

I also haven't found in my world that doing TDD, or even just testing, makes things slower - the reason I do it is that it makes things faster overall. Again, this is difficult to prove either way without proper studies (of which there are essentially none - the few out there are terrible, imho). The exception is if the team is inexperienced at testing, or when trying to retrofit tests in the middle of a project that was written without them, or that has low code quality. Then I agree it really does slow you down.

I think if I were to list the benefits of TDD in order of importance, the top entry would be that it enables rapid and ruthless refactoring, followed perhaps by the correctness verification that the tests bring. So in my world, a project which experiences a lot of churn - requirements constantly changing, designs being ripped out and replaced - is actually the very best place to use TDD. The payoff is lower on a project with clearly understood and static requirements.

The argument you use about covering every possible permutation of state and input being untenable is one I've heard a lot, in all fields of software, but I think it's mistaken, because I've seen counter-examples in which it has been done, to great effect. Those were actually the highest-functioning teams I've ever worked on, and I think the causality is in the direction of "tests made the team high-functioning", not the other way around. There are techniques to slice large phase spaces down into manageable parts - property-based testing, sketched below, is one of them.
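The idea: state a property that must hold for every input sequence and let a tool search for counter-examples, rather than enumerating cases by hand. This minimal sketch uses the Hypothesis library with a toy scoring rule invented for illustration:

```python
from hypothesis import given, strategies as st

def run_kills(points_list):
    """Toy scoring rule, invented for illustration only."""
    score, multiplier = 0.0, 1.0
    for points in points_list:
        score += points * multiplier
        multiplier += 0.1
    return score, multiplier

@given(st.lists(st.integers(min_value=1, max_value=500)))
def test_multiplier_grows_one_step_per_kill(points_list):
    # The property: however many kills, of whatever value, the
    # multiplier ends at exactly 1.0 plus one step per kill.
    _, multiplier = run_kills(points_list)
    assert abs(multiplier - (1.0 + 0.1 * len(points_list))) < 1e-6
```

Hypothesis generates many kill lists, shrinks any failure to a minimal case, and replays it deterministically - surprisingly broad coverage of a big phase space for a dozen lines of code.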

I'm a coder, but not a game coder. I work in military radars, geospatial stuff, some startups. I've dabbled in weekend or week-long hobbyist game projects, which I enjoy very much.

I'm currently on a one-man crusade to demonstrate Python as a viable gamedev language. I'm happy that even with my Sunday afternoon dabbling, I can get hundreds of independently positioned/oriented meshes rendering at 60fps even on an old, modest laptop with Intel gfx, with some behaviours in place, so that's plenty for my purposes. :-)

Developer

Python is lovely, and Eve Online proves your point already. I'd be using it if I thought it was portable enough to target all my (potential) platforms.

As for your experiences with TDDs... would you do them for yourself, with your Python code? If you think it scales to such small teams (1!), do you have suggestions for how to get into the right frame of mind? Any resources you'd suggest reading?

As I understand it, Eve uses a core of C++ for the rendering, and Python for 'once or less per frame' stuff, like AI. So Python might comprise most of the codebase by line count, but it's not in the intensive innermost loop. Of course, my needs are very modest, so I'm confident that I can go the whole hog in Python. And I think pure Python will go a lot farther than people generally believe. One day, when a JIT lands in core Python, it'll suddenly be a whole new ballgame.

Yep, I TDD all my personal projects if I think I'm going to be changing or maintaining them for a while. But if I'm in a game jam with a team of people, you can't insist on that - folks aren't into it. And I'm less strict about things that are throwaway.

As for resources, well, it's distinctly outside of the 'games' field, but IMHO the single best book on testing is "Test-Driven Development with Python", aka "The Goat Book": http://www.obeythetestinggoat.com/. It shows how to create a Python web project using the Django framework, but in a TDD style. Disclaimer: it was written by a former colleague and friend of mine, and I was a technical reviewer. But although I'm biased, I'm seriously impressed by what he achieved. He really dived into the topic, producing both more depth and more breadth than I expected, and tackling thorny real-world problems head-on. He produced something that I honestly think is a classic software book, and it's very lively and readable, without dumbing down.

Oh, and there's a free version of that book on the page.