HandySigns VR

Before MagicMods, the goal was to become a professional musician. A change of career plan was due when my hearing started to become problematic. During that period, learning sign language sounded like a smart thing to do. It stayed a thought until the day I needed to learn how to make a VR game for another project.
I was particularly interested in Meta's Oculus SDK for the Quest 2 headset and its hand-tracking capabilities.

After some exploratory work in the Unity game engine, I began to see the potential of what a standalone headset like the Quest could bring to the sign language community. Learning sign language is notoriously hard, with a steep learning curve, and most of all it is usually learned out of necessity. If you could somehow learn the basics while having fun, it would suddenly become much more accessible.

First, let's break down what sign language is made of.

1 Alphabet (Fingerspelling):

    • Purpose: The sign language alphabet, or fingerspelling, is used to spell out words, usually proper nouns (like names) or technical terms that don’t have an established sign.
    • Usage: Fingerspelling is often used when there is no existing sign for a word, or when clarifying spelling. Each letter of the alphabet has a corresponding hand shape.
    • Differences: Unlike the rest of the language, fingerspelling is not used to form entire sentences. It’s slower and less fluid compared to signs that represent words or concepts directly.

 

2 Signs (Words and Concepts):

    • Purpose: Signs represent specific words, phrases, or concepts. These are more fluid and efficient than fingerspelling.
    • Components: Signs are made up of five key parameters:
      • Handshape: The shape your hand makes.
      • Location: Where the sign is made in relation to the body.
      • Movement: How the hands move (e.g., up, down, circular).
      • Palm Orientation: The direction the palm is facing.
      • Facial Expression: Adds context, tone, and emotion.
    • Differences: Unlike fingerspelling, signs are quicker, more expressive, and form the bulk of everyday conversation in sign language.

 

3 Facial Expressions and Body Language:

    • Purpose: These are crucial in sign language to convey tone, emotion, and grammatical nuances. For example, raising your eyebrows can indicate a question, while a particular facial expression might change the meaning of a sign.
    • Differences: Unlike spoken language, where tone is conveyed through voice, sign language relies heavily on visual cues to express meaning.

Finger Spelling

Each sign language, such as American Sign Language (ASL), British Sign Language (BSL), or others,
has its own unique signs and grammatical rules, making sign languages as diverse as spoken languages.

Focusing solely on English for now, starting with ASL (American). ASL uses one-handed fingerspelling, as opposed to BSL's (British) two-handed system, which simplifies the recognition mechanism for fingerspelling. Moreover, due to current SDK limitations, scenarios involving overlap and contact between the hands are problematic. Otherwise, fingerspelling recognition is quite straightforward in terms of mechanics: create a database of bone transforms* for each character using Unity's ScriptableObjects, then compare the current bone transform values against the database. (Inspired by Valem, an excellent resource for Unity/VR development.) Performance gains can be had by discarding potential matches early.
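The mechanics described above can be sketched in a few lines. This is a minimal, engine-agnostic illustration in Python rather than the project's actual Unity/C# code: in Unity the gesture database would live in ScriptableObjects and the vectors would come from the hand-tracking skeleton, whereas here plain tuples stand in for bone positions, and all names (`GESTURES`, `recognize`, the threshold value) are hypothetical.

```python
from math import sqrt

# Illustrative "gesture database": one entry per letter, each a list of
# per-bone positions captured while the hand held that letter's shape.
GESTURES = {
    "A": [(0.0, 0.1, 0.0), (0.1, 0.2, 0.0), (0.1, 0.3, 0.1)],
    "B": [(0.0, 0.4, 0.0), (0.1, 0.5, 0.0), (0.1, 0.6, 0.1)],
}

THRESHOLD = 0.05  # max per-bone distance for a bone to still count as matching


def distance(a, b):
    """Euclidean distance between two 3D points."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def recognize(current_bones, gestures=GESTURES, threshold=THRESHOLD):
    """Return the letter whose stored bone positions best match the current
    hand pose, or None if nothing matches. A candidate is discarded as soon
    as a single bone exceeds the threshold -- the early-out that keeps the
    per-frame cost low."""
    best, best_total = None, float("inf")
    for letter, stored in gestures.items():
        total = 0.0
        for cur, ref in zip(current_bones, stored):
            d = distance(cur, ref)
            if d > threshold:  # early discard: no need to check remaining bones
                total = None
                break
            total += d
        if total is not None and total < best_total:
            best, best_total = letter, total
    return best
```

In practice the per-bone comparison would run every frame against the tracked skeleton, so the early discard matters: most letters fail on the first one or two bones checked.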

The current status is “in progress”. More could be done, but I believe it is best to wait for Meta’s SDK updates. Gesture recognition can play a big part in user interaction, and I expect rapid evolution in that area. I have also been pondering using OpenXR in order to run on other hardware and maximize compatibility.

Signs

For signs, things are more complex and a very different approach is necessary. More about that in part 2.
For now, focus has been redirected towards UI, game mechanics, and fingerspelling without any external parameters.

App Goal

  • Learn Finger Spelling in VR while having fun.

Principle

  • As simple as it can be while having a clear level and difficulty progression.
  • Offer different ways to help mimic and learn hand positions.
  • Being able to see a sign from a first-person point of view, like having a “ghost hand”, might be very helpful for some people.

Levels

  • Beginner : All the helpers visible
  • Intermediate : Show helper only once
  • Expert : No helpers

Difficulty

  • Time allocation decreases (impractical until a fast and reliable recognition system exists)
  • Number of characters per word increases

Content

  • Randomly generated names
  • No cosmetics

* Bone Transforms:

In the context of a game engine, “bones” are part of a skeletal animation system. Imagine a character model as a puppet. The bones are like the puppet’s strings and joints, controlling how it moves. Each bone represents a joint or segment of the character’s body (like an arm, leg, or spine) and is used to animate the character. “Transform” refers to the position, rotation, and scale of an object in 3D space. So, when we talk about bone transforms, we’re referring to how the bones move (position), rotate (rotation), and change size (scale).
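As a minimal sketch of the idea (not Unity's actual `Transform` class, and all field names illustrative), a bone transform can be thought of as three triples:

```python
from dataclasses import dataclass


@dataclass
class Transform:
    # Position of the bone in 3D space (x, y, z).
    position: tuple = (0.0, 0.0, 0.0)
    # Rotation as Euler angles in degrees (engines often store quaternions instead).
    rotation: tuple = (0.0, 0.0, 0.0)
    # Scale along each axis; (1, 1, 1) means unchanged size.
    scale: tuple = (1.0, 1.0, 1.0)


# A hypothetical wrist bone: raised 1.2 units up, turned 90 degrees on Y.
wrist = Transform(position=(0.0, 1.2, 0.3), rotation=(0.0, 90.0, 0.0))
```

A fingerspelling database entry is then simply one such transform per hand bone, stored for each letter.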

Details

Date:  11th August 2021
Skills:  VR, Research, Software
Tags:  Sign Language, Unity, VR
Client:  MagicMods