Event Based Sound System – Project Blog #3

Last time, at the end of Project Blog #2, we left a question unanswered.

That is, because of Tango's limitations, we cannot set our gameplay in a physically dim space, so as a substitute we made the game world itself very dim. However, a dim game world means the player is very likely to miss things in it. Our core gameplay mechanic involves finding ghosts floating around the room and photographing them while searching for key items, which requires knowing what is going on in the game world.

Yay for a conflict between atmosphere and gameplay mechanics!

Of course, we need some way to alert the player without relying on what they can see.

Even though Google Tango is an awesome system, it cannot provide anything other than visual and audio output. Since visuals are out of the question, that leaves only audio, meaning we can only tackle this problem soundly (pun intended).

So we decided that for each event – in other words, whenever something requires the player's attention – there should be a sound tied to it.

Speaking of sound, there is background music (BGM) and there are sound effects (SE). Here is how they'll work:

  • BGM
    • A background track consisting of drums, bass and some percussion plays constantly as the game's BGM. The track is composed in typical slow-beat drum & bass fashion so the player won't get bored listening to something without a melody.
    • However, when the system fires an event – a ghost appears, or a key item appears – the track's melody part, consisting of flute, piano and guitar, kicks in. Since this also changes the genre of the background music, it reads as a sudden, obvious change in atmosphere, and the unexpected shift makes the player realize something has happened.
    • This is achieved by recording the two parts as separate files. Both files play at the same time, but the melody track's volume stays at 0 until an event raises it; when the event ends, the volume is reset to 0. (A sketch of both the BGM and SE logic follows this list.)
  • SE
    • A set of SEs is prepared and read by the engine as sound files.
    • Each SE is tied to a location within the game world.
    • When something happens, the SE closest to the event plays, with its volume calculated from the distance between the player and the sound source.
  • Aside from the BGM and SE changes, when an event occurs something is also displayed on the HUD to further remind the player that action needs to be taken.
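
Here's a rough sketch of how the layered BGM and the positional SE could fit together. It's written in Python purely to illustrate the logic – the actual project runs on Tango's engine, and the track/emitter objects, the play() call and the volume and distance constants are assumptions for the example, not our real API:

```python
import math

MELODY_EVENT_VOLUME = 1.0    # assumed "full" volume while an event is active
MAX_AUDIBLE_DISTANCE = 10.0  # assumed cutoff distance for positional SEs

class LayeredBGM:
    """Base track is always audible; the melody layer stays muted until an event fires."""
    def __init__(self, base_track, melody_track):
        self.base = base_track        # drums / bass / percussion loop
        self.melody = melody_track    # flute / piano / guitar loop
        self.melody.volume = 0.0      # both loops play, melody starts silent

    def on_event_start(self):
        self.melody.volume = MELODY_EVENT_VOLUME   # melody kicks in

    def on_event_end(self):
        self.melody.volume = 0.0                   # back to the plain loop

def play_nearest_se(event_pos, player_pos, emitters):
    """Play the SE tied to the location closest to the event, scaled by the
    player's distance to that sound source."""
    emitter = min(emitters, key=lambda e: math.dist(e.position, event_pos))
    distance = math.dist(emitter.position, player_pos)
    volume = max(0.0, 1.0 - distance / MAX_AUDIBLE_DISTANCE)
    emitter.play(volume)   # assumed engine call
```

In the real build the melody volume would probably be faded in over a short time instead of snapping to full, but the structure stays the same.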

As of now, the programming of those functions is coming along nicely. I have to say, Tango is not that good a choice for horror games, due to its various limitations. It is better suited to spaces that are exactly the same in real life and in the game world – for example, a library or a museum, where Tango can provide additional information based on the user's location (e.g., when facing a specific piece of art, displaying detailed information about it on the device). But regardless, our game is still coming along nicely, and I think we are doing the best we can with it.

So, until next time.

What they see, what they get

Recently, my programming circle of friends and I asked our artist to redo the UI of one of our long-running game projects. What project, you may ask? Why, Battle Royale, of course! (Known as Hunger Games in the English version.)

While the changes detailed in this article won't be pushed to the English version yet, I found our train of thought behind the UI changes interesting, so let's talk about it here.

So, what got changed?

(Both screenshots are from the Chinese version of the game, but the changes themselves are unrelated to language.)

[Screenshot: old UI]

[Screenshot: new UI]

Aside from the obviously reworked avatar artwork, the biggest changes are the font used to label the player's status and the visible HP/SP gauges. Let's talk about them one by one.

Font and color issues

The old UI used a font that had remained unchanged until just now. Originally, when we were programming the interface, we wanted something really threatening, since we wanted to direct the user's attention to their status – especially the part showing the player's current health status (Fine in green text, Caution in yellow text, and Danger in red text). If the player is also afflicted with a special debuff, the corresponding debuff lights up in its own color.

However, recently some players felt that something was off but could not pinpoint what. As I've said before on this blog, you cannot expect your players to spell out the problem for you. So we performed some testing and found that the colored text sometimes dazzles players (I mean, at least I didn't think anything was wrong…).

Well, it’s true that indeed the contrast of fully colored text on a black background seems to be unnatural, and after looking into some design principles regarding colors, we eventually choose white on black since that’s the most attention grabbing combination without the unnatural contrast.

However, we still needed the green = Fine, yellow = Caution and red = Danger color coding to preserve the players' habits; after years of playing, I doubt any long-time player would see the "big red text" and not act on it. That's why we added a colored glow to the white text. Because the text now glows instead of being fully colored, the original sans-serif font didn't show the glow well enough, forcing us to switch to a serif font.
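
In other words, the styling after the change boils down to a mapping like this (the color values are placeholders, not the game's exact ones): the text itself stays white, and the old color coding survives as the glow.

```python
STATUS_STYLE = {
    "Fine":    {"text": "#FFFFFF", "glow": "#33CC33"},  # was green text
    "Caution": {"text": "#FFFFFF", "glow": "#FFCC00"},  # was yellow text
    "Danger":  {"text": "#FFFFFF", "glow": "#FF3333"},  # was red text
}
```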

Let’s put it this way – This is easy on the eyes.

Fill vs. outline

After dealing with the color and font of the status text, the original HP and SP gauges seemed out of place, lacking a glow and using pure, brighter colors.

Of course, our first thought was to simply make the fill color deeper, matching the hue of the new glow.

Then we found that against the black background the recolored gauges were hard to see, their colors now being too dark. Time to change that as well!

If filling the gauge with dark colors won't work, then let's just add highlights. However, to veteran players, suddenly changing the human silhouette into an always-filled shape with only highlights marking current HP and SP proved confusing – players had gotten used to the changing fill, and a fill that no longer changes was met with some negative feedback.

Faced with that, we reversed our line of thinking a bit: what if we drop the fill entirely, use just the outline as the gauge, and highlight portions of the outline as the indicator?

To the players, the highlight reads like a color fill, because we made the outline much bolder than in the earlier design.

Well, that’s the story behind the game’s recent UI change. Originally we only wanted to change the color of the text, but that eventually leads to more changes that resulted in a much different UI, as long as our players like it, it’s good for the game.

Until next time.

Okay then, let’s talk about my principle.

(Originally posted on March 9, after Lee's second loss to AlphaGo; additional information and further analysis added on March 10.)

(Disclaimer: I know more about AI than I do about the game of Go. My knowledge of the game comes from a few crash courses between the rounds – if I've made any errors, let me know.)

So it seems my own principle, the sentence I put on every other website of mine, is at stake.

“Humans have unlimited potential.”

Now, if humans have unlimited potential, how can a human be defeated by an AI, especially at the most complex game, one with countless possibilities?

In my opinion, AlphaGo's victory today actually verifies my principle: we humans indeed have unlimited potential.

To understand that, we first need to know why AlphaGo can beat Lee at this game, and why AlphaGo is different from every AI before it.

In our AI courses, we actually learned everything we need to know to understand AlphaGo: decision trees and how to prune them, machine learning and how it's carried out. Watching the first two matches, we can easily see how it differs from many other AIs, and how some people's expectations about AlphaGo are just plain wrong.

What did we get wrong?

Firstly, AlphaGo does not follow any predefined patterns; in other words, it laughs at our human tendency to lean on experience when making decisions. Some people claimed one might be able to throw AlphaGo off with “an unusual strategy” or “something it has never seen.” Lee was thinking along these lines in the first game, so he opened with a very unusual sequence. However, because of how tree search and pruning work, “use something that throws AlphaGo for a loop” was never a viable strategy.

In human eyes, a move on the Go board falls into one of two categories: either it's a move that may change the course of the game, earning the player some advantage over their opponent, or it's a wasted move that generates little or no advantage and will eventually be punished by a counter-move. In other words, any move that isn't the “best” move is undesirable.

So some people imagined AlphaGo checking each move against a predefined pattern of moves and playing accordingly.

Truth is, there are no such patterns; there are only trees.

In our eyes and experience, there exists a set of moves that tend to secure an advantage over the opponent. That's not how AlphaGo sees it. It is true that AlphaGo is trained on thousands of prior Go matches, but that data is used to develop its own policy network; the AI doesn't care about humanity's accumulated experience.

When a move is made, AlphaGo builds and prunes a search tree using Monte Carlo Tree Search. Each evaluation happens in real time and is valid only for that move; when a new move is made, a new tree is built and analyzed, and the loop goes on until one of the players loses.
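
For readers who haven't met it before, here is a minimal, generic Monte Carlo Tree Search sketch in Python. It only illustrates the selection / expansion / simulation / backpropagation loop described above, not AlphaGo's actual implementation (AlphaGo also guides the search with its policy and value networks, which are omitted here), and the game-state interface (`legal_moves`, `play`, `is_over`, `winner`, `to_play`) is an assumed placeholder:

```python
import math
import random

class Node:
    """One search-tree node; wins are counted from the perspective of the
    player who made the move leading into this node."""
    def __init__(self, state, parent=None, move=None):
        self.state = state
        self.parent = parent
        self.move = move
        self.children = []
        self.untried = list(state.legal_moves())   # moves not yet expanded
        self.wins = 0.0
        self.visits = 0

    def best_child(self, c=1.4):
        # UCT: balance win rate (exploitation) against visit count (exploration).
        return max(self.children,
                   key=lambda n: n.wins / n.visits
                                 + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts_move(root_state, iterations=10000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = node.best_child()
        # 2. Expansion: add one move that hasn't been tried from this node.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not state.is_over():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: update statistics along the path taken.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.state.to_play():
                node.wins += 1
            node = node.parent
    # Play the most-visited move; the whole tree is then thrown away.
    return max(root.children, key=lambda n: n.visits).move
```

The part to notice is the last line: the tree built for this position is used once to pick a move and then discarded, which is exactly the “no predefined patterns” behavior described above.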

Nothing is predefined, and everything is discarded once the move has been played.

In other words, you cannot throw AlphaGo off with something “that does not follow a pattern,” because it has no pattern to begin with.

Secondly, AlphaGo does not want to win by the most; it only wants to win. In the first matches, experienced and professional players alike pointed out that AlphaGo made moves that didn't “make sense” – sometimes a move generated less advantage than the alternatives, sometimes it had no apparent reasoning behind it. Their conclusion was that “AlphaGo seems unable to grasp the big picture.” And yet AlphaGo would always turn the tables and win by a very small margin.

But is it?

According to Google's research papers, AlphaGo doesn't really act like a traditional AI. In our AI courses on decision trees, we mostly want the AI to follow the branch with the better payoff; in other words, we want the AI to take the move that yields the greater advantage. That is not the case with AlphaGo. AlphaGo seems to take only the decision that makes it more likely to win. Let's take an example.

You’re an above-average student, taking an exam.

The exam has a total score of 100.

You discover a week later that you got 80/100 on the exam. Impressive! You're above average.

However, Jack, sitting beside you, got 100/100! That means the exam was so easy for him that he could get full marks.

But on another, much harder exam, where you can only manage 40/100, Jack can only manage 60/100. Then there's Mary, who got 80/100 on that exam; with her higher overall GPA she gets the scholarship, while you and Jack remain, respectively, an above-average student and a good student.

Simply put:

Above-average students get 80/100 because 80 is all they can get.

Good students get 100/100 because that’s the most they can get.

The best students don't need to get 100/100 on every single exam; they only need to make sure they do better than the good students.

AlphaGo firmly sits in the best category.

It wins by only a small margin because it only needs to win; it doesn't care about the size of the advantage.
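
A toy way to see the difference (the numbers below are made up purely for illustration): score the same candidate moves two ways, once by expected margin and once by estimated win probability.

```python
# Hypothetical evaluations of two candidate moves.
candidates = {
    "A": {"expected_margin": 12.0, "win_probability": 0.71},
    "B": {"expected_margin":  1.5, "win_probability": 0.93},
}

# A "good student" engine maximizes the margin and picks A...
by_margin = max(candidates, key=lambda m: candidates[m]["expected_margin"])

# ...while an AlphaGo-style evaluation maximizes the chance of winning and
# picks B, even though the expected margin is only a point and a half.
by_win_prob = max(candidates, key=lambda m: candidates[m]["win_probability"])

print(by_margin, by_win_prob)  # A B
```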

Come to think of it, in AlphaGo's earlier matches against the European champion, Fan Hui, every game also ended with AlphaGo holding only a small advantage. We could say that last October's AlphaGo was not up to par with today's AlphaGo, but is that really the case? Is AlphaGo actually far better than us, winning by a small margin simply because that is all it needs?

Nobody knows.

We only know that this time Lee has lost both games so far. While he resigned in the first game, in the second game's final moves he struggled to find a way to turn the tables, since the AI held only a small advantage – but he couldn't.

Lee has shown that “out of pattern” tactics won't work, and that grinding out advantages won't work either.

He still has three rounds left to try to win, but I hold out little hope for him.

Why AlphaGo could go 5:0 against humans – and can we fight back?

If there’s a chance that Lee could win one round, it’ll be on Friday Night/Saturday Dawn, in the 3rd round.

Having proven that two of the most obvious tactics don't work on AlphaGo, Lee's selection of human-exclusive tactics is now limited.

Whether he can exceed human limits and find a way to beat this AI, or only end up proving that AlphaGo is, in fact, better than us – only the Go board will tell.

But, I’ll put this here:

Humans have limits, in the form of physical limitations.

But humans have unlimited potential, and that's why humans made AlphaGo – to go beyond our own limitations.

In other words, the mere existence of AlphaGo proves that we humans have unlimited potential.

…Until next time.

Clippy, a failure?

Most users of Microsoft Office probably think Clippy and the other Office Assistants are unnecessary, which is why they are absent from Office 2003 onwards – well, they're hidden and have to be reinstalled in Office 2003, and are gone entirely in later versions. But is that really because users don't need the service they provide? I don't think so.

Well, let's look at some similar assistants, and then we'll come back to my claim.

What if we give Clippy more functions?

Around the time Clippy was infamous as an unwelcome Office Assistant, across the sea in Asia, a certain kind of program called Ukagaka appeared. The word is Japanese, from where the open-source program originated, meaning roughly “something to be fed.” Another common term for this kind of program is Nanika, literally “something.” It looks like this:

Yes, that’s sans from Undertale, a 2015 game. Outlived Clippy, heh.

Roleplay, assistant, or just random stuff – you name it.

A Ukagaka consists of three parts: a Shell, which is what you see – the virtual character on the screen; a Ghost, which decides the responses and the personality of the Ukagaka in question; and a Shiori (meaning “mark,” as in bookmark), which is the pure script programming that drives the Ghost and links it to the Shell, together with whatever functions the author wants to program. The three parts are then processed by the engine, also called Ukagaka, to present the character on the user's screen and provide its various functions.
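
Purely as a conceptual sketch (the real baseware and the SHIORI protocol are more involved than this – the class and method names below are mine, not the actual API), the three-part split looks something like this:

```python
class Shell:
    """What the user sees: draws the character and its animations."""
    def show(self, surface, text):
        print(f"[{surface}] {text}")

class Ghost:
    """The personality: decides how to respond to a given event."""
    responses = {
        "click":    ("smile", "Need anything?"),
        "shutdown": ("wave",  "Goodbye!"),
    }
    def respond(self, event):
        return self.responses.get(event, ("idle", "..."))

class Shiori:
    """The script layer: receives events and wires the Ghost to the Shell."""
    def __init__(self, ghost, shell):
        self.ghost, self.shell = ghost, shell
    def on_event(self, event):
        surface, text = self.ghost.respond(event)
        self.shell.show(surface, text)

assistant = Shiori(Ghost(), Shell())
assistant.on_event("shutdown")   # prints: [wave] Goodbye!
```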

Since any author can program the script their own way, and the scripting itself is very extensible, these desktop assistants can do everything – from chatting with you, randomly starting topics or displaying trivia, to useful functions like a calculator, a dictionary, or a search engine. They can also track the user's actions and respond accordingly (for example, closing themselves with an animation and a “goodbye” when they detect the user's shutdown command).

Guess what – people like it a lot! They like it so much that new Ghosts and Shells are produced every day by all sorts of people. Take the character in the screenshot, for example: he's from the 2015 game Undertale, while Ukagaka itself is a thing from almost ten years ago.

Why? Because it's useful? Well, then let's look at something else.

Also desktop assistants, but they do nothing aside from being cute

Around the same time as Ukagaka, or even earlier, there was another kind of desktop assistant – or better put, “pet” – called shimeji. The name is also Japanese and roughly translates to “placeholder.” They're little animated sprites that hang around your windows and desktop, doing nothing, eating your computer's memory and just being cute.

Yeah, another Undertale-inspired one, just to show they're still popular now.

These little desktop pets don't have any real function; they're just coded to hang around based on the position of your windows and your current actions. The sprites play appropriate animations when interacting with the desktop: drag one off a window and it will fall onto your taskbar (or to the bottom of the screen, if your taskbar is hidden); when you're writing something, it will look up or down as if reading what you wrote. But that's it, nothing more – these little guys provide no function at all and are simply there, hence the name “placeholder.”

Weirdly, these are also highly popular, despite being more than ten years old – proving that yes, users can also like pets that hang around the screen for no reason and with no function at all.

Then what happened with Clippy and Co.?

Let’s see what Clippy does.

When you're writing something, it jumps out and asks if you want to write a letter, even though you're probably not writing a letter.

When you get a word wrong, it jumps out and asks if you need more spellcheck options.

When you do almost anything in Office, it jumps out and asks if you need help, whether or not it's your first time doing it.

The main problem surfaces – It jumps out on its own.

Neither of the two assistants above does that.

If a user wants help, they can always click the assistant. And if a user just wants a companion, they certainly don't want that companion jumping out every two minutes to offer unneeded help.

That's the real reason the Office Assistant flopped: it shows a complete disregard for what users actually need, so it was bound to disappear. Nowadays the only trace of Clippy is hidden as an Easter egg in an optional setting, which is sad.

As for me, I actually like them as silent companions – just turn all the help off and let one watch me write.

Actually, this isn't the first time Microsoft has messed up an assistant; their first failure was Microsoft Bob, but that's another can of worms I don't really want to open here (yes, I used it; and yes, it was a huge failure).

So, until next time.

The trend of VR & AR, and its obstacles

Recently, VR & AR have truly become the hot topic. With YouTubers already showcasing the wonders of Oculus VR, and with Steam pushing its own VR hardware together with HTC (revealed just a few months ago), the future of playing games looks very interesting.

VR and AR, where are we now?

The key difference between VR (virtual reality) and AR (augmented reality) is that while VR puts the player in a virtual world using special devices (replacing what they see and hear – and in the HTC Vive's case, also what they hold, with its special controllers), AR projects the virtual world directly onto the real one. Currently, apart from the HTC Vive, which hasn't been released yet, VR devices still mainly rely on traditional controls (isn't it weird that you still need a keyboard or gamepad while you're in a virtual world?), while AR's limited usage focuses on displaying information or bringing virtual characters to life.

AR technology as used in the 3DS rhythm game Hatsune Miku: Project Mirai DX.

The effect in practice. The game supports taking photos in this mode, but the model is only projected into the real world through the 3DS screen.

As you can see in the screenshots above, AR today is as limited as VR currently is. The key to AR is that there needs to be some sort of vessel onto which the virtual image is projected in the real plane. While the Nintendo 3DS requires special AR Cards, it has already been shown that simply printing the card's content on any surface works, since the 3DS displays the models based on what's printed on the card. Similar AR technology such as Google Glass instead projects the virtual data onto a wearable display. In my opinion, AR has fewer problems to tackle, since Google Glass has already done the job of enhancing what we see (and hear, if paired with an earphone). The remaining problems are ironing out the cost of the system and the convenience of the hardware (wearables are definitely a step in the right direction). In its current form, Google Glass is still too expensive for everyday use and its functionality is still limited, but this is Google we're talking about, so that should change before long.

Current problems with VR

While AR is more or less on the right track, VR, on the other hand, has more problems.

The foremost problem is cost. To experience VR, suitable hardware needs to be purchased. A key difference between AR and VR is that while AR needs only a single device (a Nintendo 3DS costs less than $200 – think about it), VR requires an entire setup: the VR device itself, plus a computer capable of running VR apps.

According to Steam's recent statistics, only about 5% of computers can run a VR app smoothly. That's a key obstacle: if only 5% of current users can make use of VR, how much money do the other 95% need to spend upgrading their computers, on top of the steep cost of the VR device itself? For Steam and HTC's Vive, it's entirely possible one could get it working with a Steam Machine instead, but Steam Machines aren't cheap either: high-end models easily cost more than $1000, which is never a small expense.

Compared to the steep entry fee, actually programming VR apps is the least of the concerns. Do note, however, that to develop VR apps you still need a system capable of running VR along with the device itself, which keeps the cost of producing VR apps higher than one might expect.

So, my opinion on this trend: while both VR and AR are world-changing technologies, one may still have to wait a while to fully enjoy their benefits.

Until next time.