Not sure if I ever mentioned that I meet weekly with my faculty adviser on the progress of this project, but yes I do. Dr. Carbone has shown interest in this project, and has told me that he would like to see it published as research at CHI Play one day. He also enlightened me last week about how research is supposed to be conducted, and how I was jumping the gun a little bit with the mindset that I had previously. My mindset last week was that I was going to start capturing test data to find conclusions about emotions and game situations. But I realize now that I'm not quite ready for that stage yet, because I am in what he called the 'pilot stage'. This means that I really need to learn the limitations of the system and find what methods actually work best for conducting future tests. This all builds up to Institutional Review Board (IRB) approval. Once that is obtained, THEN I can start to do actual research on the biometric feedback part of it. My test methods from last week already felt sloppy and were a nightmare to run. So what did I do about this, and what are my next steps forward?
After doing some limitation tests with Camtasia, my biometric system, and a video game running together, I started to think about how I could make this process better at capturing everything. Last week, I essentially had to activate each system manually and separately, with mismatched timestamps and nearly everything running on one laptop. I started looking into HDMI splitters vs. HDMI switches. Heck, I even started to wonder about things like Beowulf clusters, which seemed so intense to set up. Then this led me to thinking about Twitch streamers. Even at a typical streaming resolution like 720x480, how are they able to play a demanding game, run an audio suite, and stream at the same time so smoothly?
As you can see, this setup actually uses two PCs, and this is what the top streamers are doing! One PC focuses solely on running the game and sending encoded video to the second PC, which focuses primarily on streaming. For this to be possible, though, you need a capture card, because normal graphics cards only have video output ports as opposed to inputs. Here is another diagram showcasing the use of an HDMI switch to switch between multiple HDMI inputs.
At home, I have my laptop and home desktop, but neither has a capture card. Even with a capture card, though, I still needed to solve the issue of getting everything to record at the same time, or incredibly close to it. In my research, I found that streamers often use OBS Studio installed on both computers, which handles some of the coordination of when streams start. So this got me thinking: if I can somehow get OBS integrated with my biometric system, then this would open up so many possibilities! I could set up multiple webcams, screen capture, and even stream with OBS on one PC, then run the game on another. Another idea is to run the biometric system and OBS on one machine, then just set up multiple webcams surrounding the player playing the game. Plus, OBS allows users to upload their own Python or Lua scripts for other cool features as well!
I got so stoked on the idea that I went ahead and started to integrate OBS with my biometric system in Unity. For this to work, I knew that I would need to write scripts for Unity that would send command line arguments to launch OBS Studio and start recording through code. Someone apparently had the same idea last year, because there was a plugin for Unity made in 2018 that was supposed to function this way. I got the plugin and started going through the documentation to put things in their correct places. I also had to do some configuration in both OBS and Unity, along with adding some code to my system to communicate with their library. Unfortunately, after doing what I believe were the correct steps, activating my biometric system would only open OBS; OBS wouldn't start recording automatically. I even reached out to the author of the tool, and he wasn't sure why it wasn't working either, and would likely need to make a patch for it to work. After some research and messing around with scripts to upload to OBS, I finally realized there was an easier solution, by referring once again to OBS Studio's documentation of launch parameters, which I got from their Discord community: https://obsproject.com/wiki/Launch-Parameters
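To give a sense of what "start recording through code" looks like, here is a minimal sketch in Python (my actual system is in Unity, where the equivalent would be C#'s Process.Start, but the idea is the same). The install path and function names here are illustrative assumptions, not part of my system; the `--startrecording` flag comes straight from the Launch Parameters wiki page above. One quirk worth noting: OBS expects to be launched from its own bin directory, so the working directory is set explicitly.

```python
import subprocess
from pathlib import Path

# Assumed default OBS install location on Windows -- adjust for your machine.
OBS_DIR = Path(r"C:\Program Files\obs-studio\bin\64bit")

def obs_command(start_recording=True):
    """Build the OBS launch command using its documented launch parameters."""
    cmd = [str(OBS_DIR / "obs64.exe")]
    if start_recording:
        # Documented launch parameter: begin recording as soon as OBS opens
        cmd.append("--startrecording")
    return cmd

def launch_obs():
    # OBS wants its own bin directory as the working directory,
    # otherwise it can fail to find its locale/data files.
    return subprocess.Popen(obs_command(), cwd=OBS_DIR)
```

Calling `launch_obs()` from the biometric system's startup code would then open OBS already recording, which is exactly the behavior the Unity plugin was supposed to provide.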
With these parameters in mind, why couldn't I just edit the launch properties tied to the shortcut for my OBS software? That is exactly what I did:
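For illustration, the Target field of the Windows shortcut ends up looking something like this (the path shown is the default install location and may differ on your machine):

```
"C:\Program Files\obs-studio\bin\64bit\obs64.exe" --startrecording
```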
After adding the launch parameter '--startrecording' to the target field, the shortcut now tells OBS to start recording on startup. I was super stoked to see this! I was able to successfully run my biometric system, which made the call to launch OBS, which then automatically started recording using the properties of my OBS scene. An OBS scene is where you specify what you want to capture, stream, and so on. There are lots of options to choose from, and with a capture card I could even set up a dual PC system! For now, my OBS scene only captures audio, since I would need additional webcams to get video; my laptop's webcam is occupied by my biometric system, and I'm only on one laptop. The point is, though, that OBS Studio is now integrated with Carter's Biometric System, and everything can capture at the same timestamp with only ONE button activation! No more mismatched timestamps or trying to line up multiple videos with audio! This gives me so many possibilities for how I can conduct future tests!