Dom Steil

Hololens Development

The Microsoft Hololens enables an entirely new immersive experience with computing. There is yet another digital layer that we have the ability to tap into and build upon. It is profound when you deploy a holographic model from your 2D computer to the 3D world.

What makes the Hololens different from other types of computing platforms? What if holographic images became ubiquitous? Are holograms anchored to points in the world the future of distributed computing? What about holographic bitcoin nodes? (Maybe a stretch.)

Disclaimer: Prior to the last month (late July/early August), I had never programmed in C# or with the Unity platform. I could have used JavaScript, but everything I read said that specifically for this type of development it is best to use C#: compiled, more powerful, etc.

I wrote this article on my phone as I built out the project.

From the start I did not want to think about the end result of seeing something floating in front of me and being able to interact with it in 3D space. Let alone voice activation built on NLP that could connect the Hololens to the REST APIs of my Bot.

Still I was able to get a digital object in front of me. The first being a cube, then a plane, then planes, then a bike, then bikes.

I prefer to think in terms of the different components the HoloApp needs to accomplish the goal of AR CPQ (Augmented Reality Configure-Price-Quote).

I know the NLP has to be trained to match the right utterance. I know that the Bot has to gather the right parameters to be serialized in a JSON object. I know the scripts that allow me to interact with the Holograms have for the most part already been built. I know that the 3D models from TurboSquid cost money and it will be tough to build a compelling demo with the free ones. I know that the GUIs will need to look great but also be easy to interact with for someone who has never used a Hololens. That the story of configuring a product in the Hololens has to make sense.

The things I don't know about yet: how to consume our APIs directly, or how to use spatial audio and spatial mapping and incorporate them into the demo.

The Hololens goes amazingly beyond whatever base ideas of holographic technology you bring to it. In a sense, any prior exposure to what you would think of as the ultimate VR future is limiting. With this in mind, it is important to understand that tapping into this digital layer requires you to think about the experience differently.

Figuring out just one of the GUIs was enough to realize that this new form of computing is in its early stages. It is early. Very.

Ironically, it put into perspective that yes, we are living in a time of incredible technological advancement, but still, it is very early. The GUIs are surfacing the state of underlying data sources. This could be a record in Salesforce, data from an entity in Dynamics, or data from a smart contract on the blockchain. The GUI in the VR world needs to be diegetic. The cards need to be interactable by gaze, gesture, and speech; they not only look cool but are there to augment the experience of the user.

It is not easy to type in VR.

This is another reason why bots and VR are catalysts for each other: the NLP used in bots will be used in VR experiences.

The other interesting concept in VR is scale. An object's scale and distance from the main camera make all the difference. Being able to grasp the distance and scale of any object will make developing for the Hololens much easier.

Another component of this world which I am just getting into is raycasting. Casting a ray from the main camera's eye to an object should affect that object's state, look, and feel in VR. Having the light hit a GameObject with a certain material applied is what makes the experience unbelievably realistic.
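
As a rough sketch of the idea (my own simplified version, not the HoloToolkit gaze manager; the class name and max distance are arbitrary):

using UnityEngine;

// Minimal gaze raycast sketch: fire a ray from the main camera every frame
// and log whatever collider it hits. This is where the gazed-at hologram's
// state, look, or feel would be changed.
public class GazeRaycaster : MonoBehaviour
{
    public float maxDistance = 10f;

    void Update()
    {
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(cam.position, cam.forward, out hit, maxDistance))
        {
            Debug.Log("Gazing at: " + hit.collider.gameObject.name);
        }
    }
}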

The MeshFilter provides the skin or surface, and the MeshRenderer determines shape, color, and texture. Renderers bring the MeshFilter, the materials, and the lighting together to show the GameObject on screen.
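
A small sketch of that relationship, assuming a mesh and material assigned in the Inspector:

using UnityEngine;

// Sketch: a GameObject only shows up on screen once it has a MeshFilter
// (the geometry) and a MeshRenderer (material + lighting) working together.
// The mesh and material fields are assumed to be set in the Inspector.
public class BuildVisibleObject : MonoBehaviour
{
    public Mesh mesh;          // e.g. a bike part exported from Blender
    public Material material;  // e.g. a standard material

    void Start()
    {
        GameObject hologram = new GameObject("Hologram");
        hologram.AddComponent<MeshFilter>().mesh = mesh;
        hologram.AddComponent<MeshRenderer>().material = material;
    }
}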

Animation is next. I need to be able to have voice activated animation drive the configuration of the product.

One of the reasons is that the air tap gesture is very confusing at first for users. The best experience would be having an object expand into its components for configuration, having the user select different options, and then validating the configuration and bringing the model back together.

The way to achieve this would be to have the game recognize keywords or intents via LUIS, and then have various scripts applied to the game object that make it interactable.

A manager game object handles this.
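
A minimal sketch of that manager, using Unity's KeywordRecognizer; the keyword strings, the target object, and the message names are placeholders for whatever the LUIS intents end up mapping to:

using UnityEngine;
using UnityEngine.Windows.Speech;

// Sketch of a manager game object that listens for a few keywords and
// forwards them to the currently selected hologram. Keywords and message
// names are placeholders.
public class SpeechManager : MonoBehaviour
{
    public GameObject target;   // the configurable product
    private KeywordRecognizer recognizer;
    private readonly string[] keywords = { "configure", "finalize" };

    void Start()
    {
        recognizer = new KeywordRecognizer(keywords);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (args.text == "configure")
        {
            target.SendMessage("Expand", SendMessageOptions.DontRequireReceiver);
        }
        else if (args.text == "finalize")
        {
            target.SendMessage("Finalize", SendMessageOptions.DontRequireReceiver);
        }
    }
}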

Animations can be achieved with scripts or with the Animator in Unity. I tried understanding the keyframes and curves but still have not figured them out yet.

I need animation so that when a user either hovers over or air taps a selection, it changes the corresponding object in the configuration.

I achieved this using Renderer.enabled on gaze enter. I then moved the 3D box colliders corresponding to the different objects out to where the tiles were.
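
Roughly what that looked like, sketched here as plain OnGazeEnter/OnGazeExit messages (assumed to be sent by a gaze manager) rather than the exact HoloToolkit interface:

using UnityEngine;

// Sketch: show or hide this tile's renderer as the user's gaze enters and
// leaves it. OnGazeEnter/OnGazeExit are assumed to arrive via SendMessage
// from a gaze manager doing the raycasting.
public class GazeHighlight : MonoBehaviour
{
    private Renderer tileRenderer;

    void Start()
    {
        tileRenderer = GetComponent<Renderer>();
        tileRenderer.enabled = false;   // hidden until the user looks at it
    }

    void OnGazeEnter()
    {
        tileRenderer.enabled = true;
    }

    void OnGazeExit()
    {
        tileRenderer.enabled = false;
    }
}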

I have the select options down, either by voice, air tap or clicker.

I also added billboarding to the quote headers so they always face you as you configure the product. I added sound so the user knows when they make a selection, and lastly I am working on getting the 2D images of the options into the tiles, in addition to making a tile green upon selection.
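
The billboard script itself is only a few lines; this version simply rotates the header toward the main camera every frame:

using UnityEngine;

// Sketch of billboarding: keep the quote header facing the camera so it stays
// readable from any angle while the user walks around the product.
public class Billboard : MonoBehaviour
{
    void Update()
    {
        Vector3 toCamera = transform.position - Camera.main.transform.position;
        toCamera.y = 0f;   // keep the header upright
        if (toCamera != Vector3.zero)
        {
            transform.rotation = Quaternion.LookRotation(toCamera);
        }
    }
}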

Actually pretty tricky, but I'll figure it out.

When calling the NLP from within the Hololens via the Direct Line REST API, there needs to be an avatar the user can speak to.

This concept of having the audio come from an actual thing gives it a persona. The next couple of steps of the demo are creating the cards for every object, which will be rendered based on the voice commands given to the avatar. Once this is complete I will need to work on the plane scene. Lastly, next week I will begin adding our CPQ APIs to the existing bike demo.

32 days until Dreamforce.

After a couple of hours I figured out turning the tiles green, replacing the original blue tiles on selection, by using GetComponentInChildren and setting Renderer.enabled to true or false on air tap.

About 3 weeks left til Dreamforce.

I just started on the demo for configuring the inside of a plane. Again, getting the objects to the right scale and the right distance from the camera is key. We now have a CPQ web service hosted on Azure, which we are calling using a UnityWebRequest.

The next thing I have to do is work on rendering different textures of a GameObject on hover.

Also, do I call the web service directly with a UnityWebRequest, or do I hit the Bot's API, which then calls the web service? Probably the latter.

Other than that, it's now a matter of just dialing everything in; I have the right assets, digital and physical, the scripts are there, and it's time to put everything together.

OK, so actually it was the first option. I called the CPQ web service using a UnityWebRequest and a coroutine on Start to create a QuoteId and CartId and retrieve the options for the bike bundle.
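
A trimmed-down sketch of that call; the URL and the response handling are placeholders for the real CPQ endpoint:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: on Start, call the CPQ web service to create the quote and cart and
// fetch the bundle options. The URL is a placeholder, not the real endpoint.
public class CpqManager : MonoBehaviour
{
    private const string CreateQuoteUrl = "https://example.azurewebsites.net/api/createQuote";

    IEnumerator Start()   // Start can itself run as a coroutine
    {
        using (UnityWebRequest request = UnityWebRequest.Get(CreateQuoteUrl))
        {
            yield return request.SendWebRequest();   // Send() on older Unity versions

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("CPQ call failed: " + request.error);
            }
            else
            {
                // downloadHandler.text holds the JSON with the QuoteId, CartId
                // and bundle options, to be parsed and bound to the tiles.
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}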

I put this C# script on the Manager object. The other web service calls, such as add option and remove option, will probably remain on the gesture handler script. Similarly, a coroutine is called OnAirTapped.

Tomorrow I have to work on parsing the JSON response from the web service and binding the option Ids to the GameObjects in Unity.
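
One way that binding could look, sketched with Unity's JsonUtility and made-up field names (the real CPQ response shape will differ):

using System;
using UnityEngine;

// Hypothetical response shapes; the real CPQ JSON will look different.
[Serializable]
public class OptionLine
{
    public string optionId;
    public string name;
}

[Serializable]
public class CartResponse
{
    public string quoteId;
    public string cartId;
    public OptionLine[] options;
}

// Small holder component so each tile knows which CPQ option it represents.
public class OptionTile : MonoBehaviour
{
    public string optionId;
}

public class OptionBinder : MonoBehaviour
{
    // Called with the raw JSON from the web service; tags each option tile
    // GameObject with its option Id so an air tap can send the right Id back.
    public void Bind(string json)
    {
        CartResponse cart = JsonUtility.FromJson<CartResponse>(json);
        foreach (OptionLine option in cart.options)
        {
            GameObject tile = GameObject.Find(option.name);   // assumes tiles are named after the options
            if (tile != null)
            {
                tile.AddComponent<OptionTile>().optionId = option.optionId;
            }
        }
    }
}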

Once this is done, I will create the action to call the finalize web service. After that, well, that's when we start to get into it. Making it pop. Then of course running into problems x, y, and z, but ultimately heading in the right direction.

I ran into a few serialization errors, to say the least. After hours of testing different scenes and debugging, I have a stable project back.

The next part I have to develop is the game loop and surfacing the data in the GUI. The game loop takes an index of the compiled scenes and directs between them. Advancing levels in a game, same concept. The level 0 I need to finish is a switch case: if bike A is selected, go to scene A; if bike B, scene B; if bike C, scene C.

The three bikes will be rotating, and upon selection it will LoadApplication(scene#).
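
A sketch of that level-0 switch using SceneManager.LoadScene (scene names are assumptions; the same idea applies to whatever load call the project uses):

using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the level-0 selection loop: the selected bike decides which
// configuration scene gets loaded. Scene names are assumptions.
public class BikeSelector : MonoBehaviour
{
    public void OnBikeSelected(string bike)
    {
        switch (bike)
        {
            case "BikeA":
                SceneManager.LoadScene("SceneA");
                break;
            case "BikeB":
                SceneManager.LoadScene("SceneB");
                break;
            case "BikeC":
                SceneManager.LoadScene("SceneC");
                break;
            default:
                Debug.LogWarning("Unknown bike: " + bike);
                break;
        }
    }
}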

The other game loop will be to finalize the configuration. Upon the user saying "Finalize", the KeywordManager will set the Finalize GameObject to active. I have an if statement in the GameObject: if this.SetActive(true), then StartCoroutine(Finalize) and LoadApplication(0).
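
Roughly, the script sitting on that Finalize gameobject; the URL is a placeholder and the actual finalize body is omitted:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.SceneManagement;

// Sketch: the KeywordManager activates this object when the user says
// "Finalize"; OnEnable then calls the finalize web service and returns to the
// selection scene (index 0). The URL is a placeholder.
public class FinalizeHandler : MonoBehaviour
{
    private const string FinalizeUrl = "https://example.azurewebsites.net/api/finalize";

    void OnEnable()
    {
        StartCoroutine(Finalize());
    }

    IEnumerator Finalize()
    {
        using (UnityWebRequest request = UnityWebRequest.Get(FinalizeUrl))
        {
            yield return request.SendWebRequest();   // Send() on older Unity versions
            // Optionally play a sound or show the quote here before leaving.
            SceneManager.LoadScene(0);
        }
    }
}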

Possibly some other cool stuff, such as sounds and showing the quote, before going back to the selection scene.

The plane is still a work in progress. It is a different experience because of the small field of view of the Hololens. It is still very cool, but I am working on a solution to make it more of an AR experience vs. VR. Overall, both projects have come a long way. Leveraging the HoloToolkit, exporting Unity packages, googling, and reading books on C# and Unity have all had a huge impact.

There are about 10 days until Dreamforce, and the last 10% of any project like this is the toughest.

It should come down to the wire; it always does.

Last night I might have figured out a powerful and reusable way to build augmented reality components. Getting the Renderer and toggling Renderer.enabled can be applied to any sort of dynamic action. Here are my notes:

var x = GetComponentInChildren<Renderer>();   // grab the Renderer on a child object

x.enabled = true;    // show the child's mesh

x.enabled = false;   // hide it again

Everything is a GameObject.

Overlay the options for the bike selections, and on selection set Renderer.enabled = true for the corresponding text.

Essentially, having the different objects within a parent rendered based on conditions being true or false in the parent ties together the child components, i.e., a change of color and text showing up at the same time.

I still have some testing to do but I think this may enable a lot.

I have roughly a week to dial everything in. The plane will be today, working on rendering different textures. Other than that, it is moving the colliders for the other bikes and creating a bike select script.

I have about 3 days left, and the last week has been a game changer. With some help (it would not have been possible without them), I was able to build and deploy the last parts of the project.

Item 1) Serialization with Newtonsoft on UWP takes a little toying with. Download the package and a portable path, and download a separate dll I will attach here.

Net: a major part of the project.

Binding data to the GUI dynamically, by getting the GameObject and setting the material.

This was also huge and done with just a bit of code.
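
A minimal sketch of that pattern, with placeholder names and a material assumed to be assigned in the Inspector:

using UnityEngine;

// Sketch: bind a selection back to the GUI by grabbing the tile GameObject
// and swapping its material and label text. Names are placeholders.
public class TileBinder : MonoBehaviour
{
    public Material selectedMaterial;

    public void ShowSelected(string tileName, string labelText)
    {
        GameObject tile = GameObject.Find(tileName);
        if (tile == null)
        {
            return;
        }

        tile.GetComponent<Renderer>().material = selectedMaterial;

        TextMesh label = tile.GetComponentInChildren<TextMesh>();
        if (label != null)
        {
            label.text = labelText;
        }
    }
}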

The biggest thing I just figured out, originally for the plane but it applies to the bike as well, was being able to loop over each mesh in a parent and render it.
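
In code form, roughly:

using UnityEngine;

// Sketch: toggle every mesh under a parent in one pass, e.g. to reveal or
// hide a whole assembly of the plane or the bike.
public class ParentMeshToggle : MonoBehaviour
{
    public void SetVisible(bool visible)
    {
        foreach (Renderer childRenderer in GetComponentsInChildren<Renderer>())
        {
            childRenderer.enabled = visible;
        }
    }
}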

Today I built the plane scene and added keywords so the options can be spoken.

Still have to bind the quote number and updated price.

I worked two months day and night to be able to deliver tomorrow.

There is no doubt in my mind this was one of the biggest challenges I have ever had. From learning Unity, learning C#, learning how to call the Apttus APIs, importing 3D models and scaling and texturing them, solutioning the build, designing the GUIs, and making it voice activated; overall, yes, it was very difficult.

There are two things I will still need to figure out with the build. How to render different materials of a GameObject on tap… but I have ideas… just thinking on it still.

And how to deselect an option in a group before selecting another.

Other than that, yeah, two, three months of work. Never thought I'd work on that.

Enterprise Augmented Reality, Apollo.

Top Tips:

- Deploy your apps to the Hololens using the USB cord. I was using WiFi for the two months of development and on the last day realized deployment took 30 seconds with a USB. (Depending on the size of the app, it could take 15-20 minutes over WiFi.)
- TurboSquid –> Blender –> Unity
- Leverage the Unity Asset Store
- Export your Assets and Project Settings / Create External Packages
- Final 10% is the toughest

Learn:

- Shaders
- Storyboard it out
- Create Prefabs
- Export Unity Packages
- Additive vs Subtractive Color
- Vector3
- Quaternion
- Mathf
- Everything is a GameObject
- Declaration before Definition