Friday, June 7, 2013

Individual Contribution

At the end of this project, Geriambience has created a system that allows a user to interact with a virtual bathroom using the skeletal tracking functions of the Microsoft Kinect. The bathroom has been carefully modeled to replicate a real bathroom and uses Caroma bathroom products, as Caroma is the sponsor of this project. The Kinect tracks the user and represents them within the virtual bathroom, which demonstrates, visually and in text form, how the bathroom is reacting to the user's actions.
While our group had a somewhat disjointed start, as the project progressed we came together to develop a fully functioning product. Our final result not only represents the conceptual elements of the potential bathroom, but provides a basic framework on which a more fully featured and physically realized project could be constructed.
This blog post details my personal involvement in the project and an assessment of my development and contributions.


Contributions:


Once we split the group into two teams, the programming team (Steven and I) and the bathroom design team (Laura, Dan, and Siyan), we began looking at how we were going to approach this project. There were effectively two stages of development that I was involved in: the Kinect Movement System phase and the Kinect Interactivity phase.

Kinect Movement System:
In the initial brief, we had planned to develop a system through which the Kinect could be physically moved, allowing us to effectively track people through a bathroom space. After a few concepts, we decided to imitate the "Skycam" system, where a camera is suspended on four wires wound onto winches at each corner of the room, meaning the Kinect would be able to move anywhere in the room. We also included a rotating joint on the system so the Kinect could rotate to track a user in corners or other hard-to-reach places. After some preliminary designs, I came up with this plan:

[Image: plan of the Kinect movement system]

This worked using two separate sections: the top plate, which held the four wires and the stepper motor, and the bottom plate, which held the Kinect sensor. These were connected by four bolts that ran in a groove on the bottom plate, allowing the stepper motor to spin the bottom plate.
We got to the prototyping stage and created the following prototype:

[Images: the movement system prototype]

Unfortunately, once we reached this stage we realized that developing this further would take at least the rest of the semester: the prototype had a number of flaws, such as a lack of smoothness when turning, and the Arduino implementation for moving the wires required a large number of parts to be shipped.
We decided that instead of continuing to focus on this single element, we would begin work on the Kinect interactivity and order a 2-axis powered joint, which we hoped to implement much more quickly than this system. Unfortunately, that joint never arrived, but we discovered that, for testing purposes, a stationary Kinect gave us sufficient coverage of an area, provided we positioned it correctly.

Kinect Interactivity:
When we moved on to the Kinect Interactivity phase, we started by approaching Stephen Davey, who had previously created some Kinect integration, getting a copy of his CryEngine 3 code, and running through exactly what it did with him. Thankfully I had some previous experience with both C++, the programming language the code was written in, and Visual Studio, the integrated development environment used to edit it, so Stephen was able to explain the code to Steven and me at a higher level than would otherwise have been possible.
Once we had a basic understanding of how he'd tied the Kinect SDK to the CryEngine 3 code, we began building our own flowgraph node to suit our needs. Here is an image of the final node:

[Image: the final Kinect flowgraph node]

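Broadly, the node follows CryEngine 3's standard flowgraph node pattern. The sketch below is illustrative rather than our exact code: the port layout and joint names are simplified, and KinectSource is a hypothetical wrapper around the Kinect SDK's skeleton polling, not a real API.

    // Schematic CryEngine 3 flowgraph node exposing Kinect joint positions
    // as vector outputs. (Registration for per-frame updates is omitted.)
    class CFlowNode_KinectSkeleton : public CFlowBaseNode<eNCT_Instanced>
    {
    public:
        CFlowNode_KinectSkeleton(SActivationInfo* pActInfo) {}

        virtual void GetConfiguration(SFlowNodeConfig& config)
        {
            static const SOutputPortConfig outputs[] = {
                OutputPortConfig<Vec3>("Hip",   _HELP("Hip joint position")),
                OutputPortConfig<Vec3>("HandL", _HELP("Left hand position")),
                OutputPortConfig<Vec3>("HandR", _HELP("Right hand position")),
                {0}
            };
            config.pOutputPorts = outputs;
            config.SetCategory(EFLN_APPROVED);
        }

        virtual void ProcessEvent(EFlowEvent event, SActivationInfo* pActInfo)
        {
            if (event == eFE_Update)
            {
                // Push the latest tracked joint positions out of the node
                // every frame so the rest of the flowgraph can react to them.
                // KinectSource and the joint identifiers are hypothetical.
                ActivateOutput(pActInfo, 0, KinectSource::GetJoint(eJointHip));
                ActivateOutput(pActInfo, 1, KinectSource::GetJoint(eJointHandLeft));
                ActivateOutput(pActInfo, 2, KinectSource::GetJoint(eJointHandRight));
            }
        }

        virtual IFlowNodePtr Clone(SActivationInfo* pActInfo)
        {
            return new CFlowNode_KinectSkeleton(pActInfo);
        }

        virtual void GetMemoryUsage(ICrySizer* s) const { s->Add(*this); }
    };

    REGISTER_FLOW_NODE("Kinect:Skeleton", CFlowNode_KinectSkeleton);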
We began by creating an output for the hip joint, so we could begin tracking the player's rough location. Within CryEngine, we created a sphere that would constantly move to the vector provided by the Kinect, so we could visualize the player's location. This revealed that the Kinect uses a different orientation to CryEngine: depth from the Kinect was height in CryEngine. This can be seen in this video:

Thankfully this problem was easily fixed, since we just needed to swap the components of the vector around. Here is a time-lapsed screen capture taken during the Kinect coding:



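For anyone curious, the fix itself was tiny: the Kinect reports positions with y up and z as depth, while CryEngine treats z as up, so the two components just needed to be swapped. A minimal standalone sketch (the Vec3f struct here is a stand-in for the engine's vector type):

    // Kinect skeleton space: x = right, y = up, z = depth from the sensor.
    // CryEngine world space:  x = right, y = forward/depth, z = up.
    struct Vec3f { float x, y, z; };

    Vec3f KinectToCry(const Vec3f& k)
    {
        // Swap the up and depth axes so the skeleton stands upright
        // in the engine instead of stretching away from the floor.
        return Vec3f{ k.x, k.z, k.y };
    }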
Once we had this working, we expanded upon it by creating vector outputs for each of the joints, so we could visualize the user within the environment. We used these vectors to position spheres at each of the joints, with the following results:

From here we looked back at the interactions we had initially set out to create in the brief, to see how best we could implement them.

The following video is a timelapse of the interaction development:


A lot of the interactions were based on a user's hand or whole body being near an object, such as a sink or a shower, so we created a system that would react to the user's hand being near a virtual object, in this case by re-sizing it:


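The underlying check is nothing more than a distance threshold between a tracked joint and the object's position. A minimal sketch (the half-metre radius is an illustrative value, not our tuned one):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    // True when the tracked hand is within `radius` metres of the object.
    bool IsHandNear(const Vec3f& hand, const Vec3f& object, float radius = 0.5f)
    {
        const float dx = hand.x - object.x;
        const float dy = hand.y - object.y;
        const float dz = hand.z - object.z;
        return dx * dx + dy * dy + dz * dz <= radius * radius;
    }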
There was also the toilet height adjustment, which tracked the user's bone lengths and set the toilet to the appropriate height. We created a test implementation of this using a horizontal door to simulate a seat:


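Conceptually, the height setting is just a bone-length measurement: the distance between two tracked joints approximates the user's knee height, which becomes the seat height. A hedged sketch (the joint choice and the direct mapping are simplifications of what we actually tuned):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    float Distance(const Vec3f& a, const Vec3f& b)
    {
        const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Approximate a comfortable seat height from the lower-leg bone:
    // knee-to-ankle length is roughly the user's knee height.
    float SeatHeight(const Vec3f& knee, const Vec3f& ankle)
    {
        return Distance(knee, ankle);
    }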
We then began looking at how you could control shower temperature. Stephen Davey had given us a basic understanding of how to create custom gesture recognition in the code, so we came up with a simple gesture that we could use to raise or lower a variable without accidentally activating it. It involves the user holding their right arm out horizontally, then raising or lowering their left arm to raise or lower the variable, in this case water temperature. We represented that variable in the initial tests by re-sizing a ball:


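In code terms the gesture reduces to two tests: is the right arm held out roughly horizontally (arming the control), and, if so, how high is the left hand relative to its shoulder (setting the value). The sketch below uses illustrative tolerances, with z as the up axis:

    #include <cmath>

    struct Vec3f { float x, y, z; };

    // The control is "armed" while the right hand is roughly level with
    // the right shoulder and held well out from the body.
    bool RightArmHorizontal(const Vec3f& shoulder, const Vec3f& hand)
    {
        const float heightDiff = std::fabs(hand.z - shoulder.z);
        const float dx = hand.x - shoulder.x;
        const float dy = hand.y - shoulder.y;
        const float reach = std::sqrt(dx * dx + dy * dy);
        return heightDiff < 0.10f && reach > 0.40f;  // metres, illustrative
    }

    // While armed, the left hand's height relative to its shoulder maps
    // linearly onto the controlled variable, e.g. water temperature.
    float MapTemperature(const Vec3f& lShoulder, const Vec3f& lHand,
                         float minTemp, float maxTemp)
    {
        float t = lHand.z - lShoulder.z + 0.5f;  // roughly normalize to 0..1
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return minTemp + t * (maxTemp - minTemp);
    }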
Once we were comfortable with creating interactions, we moved on to implementing them with the bathroom created by the other team.
The other team had usefully already created some basic flowgraph interactions, such as turning lights on and off and moving the toilet up and down.
The various interactions have been covered in the previous posts, but this video demonstrates all of them, with basic explanations of each:




Here is a time-lapse of some of the bathroom interactions being implemented:
There were a number of challenges when creating these interactions, especially the context-sensitive ones, like the toilet and sink movement. Flowgraph's real-time style of operation makes implementing classical programming constructs, like loops and conditions, somewhat more complicated. Thankfully we both had some previous experience with flowgraph, so we were eventually able to come up with effective solutions.
An overview of the final flowgraph can be seen here:

[Image: overview of the final flowgraph]

The final interactivity was quite effective, and left me with a greater understanding of the challenges and possibilities of interfacing hardware and software. I feel there are a lot of opportunities for this system to be expanded and refined, especially if the Kinect could be replaced with a dedicated piece of hardware that could be mobilized to physically track the user while being less intrusive.



Individual Development:

During this project I learnt plenty of new things; some I expected, and others came as something of a surprise. The main areas I developed in were:

Mechanical Design:
I spent the first few weeks designing the Kinect movement system. I've made plenty of 3D models for this course previously, some of which have even been physically realised, but I've never had to create something with features beyond the aesthetic. This made creating the movement system an interesting challenge. It took me a while to even figure out how to make it work, since we couldn't simply hang the heavy Kinect hardware off an inverted stepper motor; eventually I designed a system that would let the motor spin the Kinect without having to support its weight. In retrospect, I would've been better off focusing on a pre-built system, but after some quick initial research turned up nothing appropriate, I decided it would be more effective to develop my own. I designed it to use laser-cut pieces, so it could be relatively easily assembled and simple to prototype. Unfortunately the hardware, using nuts, bolts, u-bolts and split-pins, was more expensive than I expected and more difficult to assemble.
Through this process I learnt the importance of careful measurement and planning ahead, as well as thoroughly exploring other options before dedicating yourself to a single choice.

Interaction Development:
Developing the Kinect interaction was something I'd never really done before. I'd done some Kinect development for the HyperSurface installation at last year's Sydney Architecture Festival, but it was at a very abstract level and focused on representing the user, rather than on how the user interacted with the wall.
For this project, I had to explore how people could interact with the Kinect system in an intuitive fashion. As the main interaction tester (since Steven preferred not to be on camera), I had to think about not only whether the system felt natural to me, but also how it would feel to an older person or to people of different heights, and whether the interaction would be easy to explain to someone else.
[Image: the maths and logic controlling the light brightness]
I also had to decide, with Steven, on the best way to implement these interactions. Mathematically, I learnt some interesting new concepts, like 3-dimensional trigonometry to check whether a user's hands were in the correct positions relative to their body, and some more logic operations to control the flow of data through the flowgraph. This was probably the most interesting part of the project: trying to think about how natural human movement can be translated into a mathematically defined algorithm, while not being so specific that it becomes difficult to make the correct movement.

Intellectual Property:
Another area our group, and I personally, engaged with was intellectual property, for the group presentation in week 8. My particular area of research was what intellectual property our project was creating and how we should protect it. Having never really looked into this subject before, I was very interested to learn about the variety of types of IP that exist and the different uses for each one. The importance of protecting my ideas and creations had never occurred to me, probably because I've never tried to commercialize them, but the research I did showed me how failing to protect your ideas can mean you risk losing any potential benefit from your work.

Collaboration:
Collaboration is something I had some experience with from other group projects, especially the twenty-person group project for studio last semester. For this class, there were two levels of collaboration: with Steven Best within the programming team, and with Laura, Dan, and Siyan in the design team. Steven and I had worked on a number of projects together previously, though this project was a bit different, as it essentially required us to work on a single computer. This was due to the CryEngine programming being difficult to split between two computers, as well as the fact that we only had access to one Kinect. This meant we spent a few days each week coming onto campus and working together on the project. It proved to be a very effective way of collaborating, since whenever one person had a problem, the other could provide assistance. It was also particularly useful for this project, since we were constantly testing with the Kinect. This would be quite difficult by yourself, as you'd constantly need to go back and forth between adjusting things in the flowgraph and interacting with the Kinect. With two people, one could be interacting with the Kinect while the other stayed at the laptop and made adjustments based on what was happening. This sped up the development of the various interactions greatly, letting us refine them and ensure they worked consistently.
With the design team, we were somewhat disconnected for the first half of the project, since the design of the movement system and the design of the bathrooms did not require much collaboration. Once we moved on to the Kinect interaction, however, we began to work more closely together. The design team provided updates each week showing us how the bathrooms and the surrounding environments were looking. We used these to ensure that the Kinect systems we were developing would be appropriate to the fixtures they had in place. They also provided us with a set of basic interactions driven by the keyboard, which we were able to modify and integrate with the Kinect sensor.
They provided a large amount of detail on the bathroom design via the wiki, such as information about the various fixtures, plans of the bathrooms, and descriptions of the basic interactions. This was very useful throughout the development of the interactions, since we could already understand how the fixtures could be mobilized and what the existing flowgraph nodes did. It also helped when we were setting up the pseudo-bathroom to demonstrate the Kinect, since we already had a plan that showed us the dimensions of the bathroom and the locations of the various objects.
The other way we collaborated was via a Facebook group. This was particularly useful when organizing things like the intellectual property presentation, as well as the times when the whole group met up, such as when setting up for the presentation. I think the important thing to realize about Facebook is that while it can be useful for some areas of collaboration, it's almost impossible to use for others. It was excellent for collaborating on the IP presentation, since all we needed to communicate was text and images, but it would've been impossible to use for the Kinect programming, since that required modifying large files and constant testing and adjusting.
The combination of the wiki, Facebook, and face-to-face meetups allowed our group to collaborate effectively over the course of the project.


Future Considerations:

If we were to repeat this project, there are a number of things I would do differently.
The main problem was that we initially spent too much time on what should've been a small aspect of the project: the movement system. Next time, I would evaluate how long I wanted to spend on that system and then choose a kind of system appropriate for the time-frame. We would spend 1-2 weeks settling on a movement system, most likely the 2-axis powered hinge, then work on the Kinect interactions until the part arrived. That way we could begin integrating the movement system while still working on the Kinect interactions.
In a more general sense, a greater level of planning prior to starting the project would be beneficial. Having a more clearly defined target and ensuring that we lay out the steps that are required to get to that target would decrease the stress and confusion within the project, as well as providing a metric that allows us to continually evaluate how our project is progressing.


Final Summary

This project has created an effective method of connecting 3D tracking technology to a virtual environment. It has also created a framework of interactivity which demonstrates how a user could potentially interact with a physical bathroom in a natural way. The project has been documented on the wiki to ensure that anyone in the future with an interest in this area has a resource they can refer to, and they can contact us via the comments if they want more information.
The project was an interesting learning experience, and allowed me to explore new areas of design I hadn't previously considered. My hope is that the elements and concepts within this project will continue on and eventually be fully realized as a commercial product that enables people to interact with their environments in an intuitive and useful fashion.

Tuesday, June 4, 2013

Week 13

This week was originally going to be the project presentation week, but due to some unfortunate problems with the CryEngine 3 software, which five of the seven groups were using, only two groups were able to present. Our group was ready to present, and even had a video to demonstrate the project in case the live demonstration didn't work for some reason. However, we decided it was best to give a live demonstration of the project, and so we agreed with the other CryEngine groups to postpone it until next Tuesday.

Tuesday, May 28, 2013

Week 12

This week we refined the bathroom interactions, as well as adding a new interaction.

  • Light brightness control: We decided to add another gesture-based interaction to control the brightness of both the main light and the mirror lights. This gesture simply involves placing both hands at shoulder height and moving them apart to brighten the light, or together to dim it. The control switches between the main and mirror lights depending on where the user is standing, and once the user drops their hands, the light locks to the current brightness. (A sketch of the mapping follows below.)
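A sketch of how that gesture maps to code (the thresholds and the full-brightness span are illustrative values, not our tuned ones; z is the up axis):

    #include <cmath>

    struct Vec3f { float x, y, z; };

    // The gesture is active while both hands are near shoulder height.
    bool BrightnessGestureActive(const Vec3f& lHand, const Vec3f& lShoulder,
                                 const Vec3f& rHand, const Vec3f& rShoulder)
    {
        return std::fabs(lHand.z - lShoulder.z) < 0.15f &&
               std::fabs(rHand.z - rShoulder.z) < 0.15f;
    }

    // Horizontal distance between the hands maps onto brightness 0..1,
    // so moving the hands apart brightens and together dims.
    float BrightnessFrom(const Vec3f& lHand, const Vec3f& rHand)
    {
        const float dx = rHand.x - lHand.x;
        const float dy = rHand.y - lHand.y;
        float b = std::sqrt(dx * dx + dy * dy) / 1.5f;  // ~1.5 m span = full
        if (b > 1.0f) b = 1.0f;
        return b;
    }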
We also corrected some inconsistencies with the fall detection and improved the smoothness of the sink and toilet movement.

We also, at Russell's suggestion, added more direct feedback on what was happening within the bathroom. This can be partially seen in the videos that have been uploaded; it effectively lets the user know what's happening. For instance, when the temperature is being changed, there is a number that indicates the current temperature. There's also text that lets the user know whether the bench height is currently tracking them, and whether they're in a position to adjust the main light or the mirror light.
We also placed the "Calling Ambulance" message in the same system, but gave it a more prominent position.

In preparation for the presentation in week 13, we set up an area in the classroom to simulate the bathroom. We outlined major features like the shower and sink with masking tape on the ground, used a chair for the toilet, and used tables to represent the walls. We then shot the following video to comprehensively cover all the interactions, in case we were unable to get the live features to work during the presentation (this is also where we filmed the feature videos for this and the previous post):



For our part of the presentation, we decided that a live demonstration would be by far the most effective way to present, along with asking Russell or Stephen to use the system to see how it works. I'll be the demonstrator, while Steven talks about what I'm doing.

Thursday, May 23, 2013

Week 11

Bathroom Integration

This week was spent combining our Kinect interactivity with the bathroom models created by the other team. We began by positioning the player character within the virtual bathroom, which required some scaling and careful positioning. Once this was done, we began implementing our ideas regarding the interaction with the bathroom.

  • Lights: The first, and theoretically easiest, task was to control the lights based on the presence of a user. The lights only turn on if the user is within the room, and the mirror lights turn on when the user is near the mirror. The room lights turned out to be more difficult than initially thought, since the Kinect's range was not large enough to scan the entire room while still being able to sense a person leaving it. Unfortunately this is more of a hardware problem, which is difficult to rectify given the closed nature of the Kinect hardware.
    The mirror lights work quite well, once it was realized that the distance between the mirror and the user was being measured from the corner of the mirror.
  • Temperature: The next interaction was controlling the temperature using our custom-built gesture.
    Since this had already been used to control the size of an object during initial testing, it was quite simple to use it to instead control the height of a rectangular prism in the corner of the shower, to represent the temperature change.
  • Height Control: This ended up being one of the more difficult interactions to create. Although we'd already created a "bench" that would automatically match your knee height, using this to change the toilet and sink heights was more difficult. For instance, we didn't want the toilet to constantly move around as the user moved; rather, it should match their height once and lock at that position. This took some time to figure out. Another issue was only activating the fixtures when the user was close enough, to ensure that, for instance, the sink did not suddenly lower itself when a user sat on the toilet.
    There was also the problem of the bench going to extreme positions when the Kinect was unable to track the user, so there are now limits in place for both the bench and the toilet to ensure they maintain reasonable heights regardless of any extraneous input.
  • Fall Detection: One of the interactions we discussed very early on in the project was detecting when someone falls down. This was also a difficult thing to detect, since the Kinect cannot properly sense a person lying down: their limbs are less defined, and their angle to the camera means there are often parts of the body that can't be seen. In testing, the representation of the user would glitch and jump around, meaning our expected check of "Is the waist at the same height as the head?" would not consistently work. We developed a process which gets the general difference between the head and waist over a period of time, so even if the skeleton is glitching and jumping around, it can still detect if the person has fallen over and notify someone automatically. (A sketch of this check follows below.)
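As a sketch, the fall check averages the head-to-waist height gap over a short window of frames, so a single glitched skeleton frame can't trigger or mask the alarm (the window size and threshold below are illustrative, not our tuned values):

    #include <cstddef>
    #include <deque>

    class FallDetector
    {
    public:
        // Feed in head and waist heights each frame; returns true once the
        // averaged head-to-waist gap collapses, indicating a likely fall.
        bool Update(float headHeight, float waistHeight)
        {
            m_samples.push_back(headHeight - waistHeight);
            if (m_samples.size() > kWindow)
                m_samples.pop_front();

            float sum = 0.0f;
            for (float s : m_samples)
                sum += s;
            const float average = sum / static_cast<float>(m_samples.size());

            // A standing user's head is well above their waist; only alarm
            // once a full window of frames averages below the threshold.
            return m_samples.size() == kWindow && average < kThreshold;
        }

    private:
        static const std::size_t kWindow = 60;  // ~2 seconds at 30 fps
        static const float kThreshold;          // defined below
        std::deque<float> m_samples;
    };

    const float FallDetector::kThreshold = 0.25f;  // metres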

Tuesday, May 21, 2013

Group Presentations Review: Remuneration

Vivid Group:

The Vivid Group did the final group presentation on the subject of remuneration.
Their presentation was inconsistent. Some members presented clearly and engagingly but, possibly as a result of the topic rather than personal choice, went into far more detail than was useful, creating something of an information overload, while other members seemed unsure of what they were talking about and were overly vague.
The members in the first group were surprisingly competent at audience engagement, only glancing down at their notes occasionally and speaking clearly and informatively. Those in the second group fell into the common trap of spending most of their time talking to their notes.
The Prezi presentation was well structured but had too much text on the screen, making it difficult to get a clear picture of what they were saying, especially with some members talking at an excessive speed.

The written presentation was detailed but suffered from too many unexplained terms and concepts. It may have assumed too much about the knowledge of the audience, meaning that complex ideas were being explained without explaining the core ideas they're built on.

They gave some good examples which were fairly well explained, but the examples didn't really relate to their project, so their usefulness was limited.
Their presentation didn't have many images in it, and the ones that were present were fairly generic stock photos, though this is somewhat understandable given their topic.
They did seem to have a decent understanding of the topic, but some of them seemed unsure how to present this knowledge effectively.
With that being said, it is a difficult topic to make interesting or engaging, so some leeway could perhaps be given for that, but I think a simpler presentation with more direct links to their project would've made it much more informative and engaging.

Tuesday, May 14, 2013

Group Presentations Review: Conflict

DCLD Group:

The DCLD Group was the first to present on the subject of conflict. Their presentation was detailed and complex, but suffered from the common problem of having too much text on the slides and a lack of engagement with the audience. Their oral presentation leant heavily on reading text from the slides, or at least from the screen in front of them. This meant that instead of feeling like I was being told something, it was more like I was simply being read something, which I can do on my own. There also seemed to be a lack of organisation with the slides, as they often skipped over some slides or went back to previous ones.
The written component of the presentation contained plenty of detail, but in my opinion consisted of more lists than was useful. Lists are good for creating a starting point, giving a basic outline of what topics you're covering, but beyond that it's more engaging, and easier to follow, to talk about things progressively, moving smoothly from one idea to the next without just jumping to the next point on a list. As an extension of the list problem, there was a sense of disconnectedness between the topics, with little explanation of why they were in this particular order or how one topic followed on from the previous one.
Their examples were good, especially the BIM one, but were not explained effectively, so while I got a good idea of what conflicts might arise, I didn't really understand what the potential solutions were.
They referenced most of their images, but the references were often too small to read.

Their images were decent, but contained a lot more text than was useful. In some cases, especially the large flowgraph, it was almost impossible to read the text, meaning the audience had only the vaguest idea how it connected to the topic.
While there was useful information in the presentation, the way it was presented gave the impression that the team didn't really have a deep understanding of the topic and were just reading what was in front of them for the first time.

Kinecting the boxes:

The “Kinecting the boxes” presentation was the second one on the topic of conflict.
Their oral presentation was better than the last group's, as they read mostly from notes rather than off the slides, though there was still a distinct lack of connection with the audience, which could've been achieved by at least looking at them regularly.
Their reading was clear, but not engaging, as if they were simply dictating the text rather than trying to explain something to an audience.
There was also a significant imbalance between the people presenting, as some presented for far longer than others. This may have been due to one person simply being more willing to talk than others, but it did lead to what felt like a less effective group dynamic.
The presentation also went substantially longer than it should've; succinct explanations are far more effective with an audience than lengthy, convoluted details.
Their written presentation was quite clearly written, though it tended on occasion to be oddly melodramatic. They suffered from the same problem as the last group by including far more lists than was useful. They did manage to maintain a reasonable level of flow between the various topics, though some kind of overall outline would have been useful in making this clear.
Their examples were a bit unrelated, and tended to be very general rather than project-specific. While it's possible they had suffered very little conflict in their group, even theoretical examples are more useful than broad generalisations, since conflict resolution is something that should be considered on a case-by-case basis.
Their presentation was well laid out, but a lot of the images had too much text. The flowgraph was interesting, but took far too long to explain.
It was difficult to tell if they had a good understanding of the topic, with so much of the information being read off cards rather than delivered to the audience. While they had a fairly basic view of conflict, they explained it well and gave plenty of detail on the potential resolutions, though there were a few occasions of repeated content.

Monday, May 13, 2013

Week 10

Interactivity Testing:

Gestural Control Test: Scaling a ball using a specific gesture, to protect against inadvertent activation; usable for light or temperature control:

 

Proximity Test:
Using the distance between a specific joint, in this case the hand, and an object to interact with the object, usable for operating taps or opening cupboards:


Skeletal Analysis Test:
Using skeletal proportions, in this case from the leg, to control the height of an object, such as a chair or bench, to make it comfortable to use:

Arduino Testing:

Stepper motor test: