Microsoft’s devices will know what emotion you’re feeling

November 11, 2015

Microsoft has announced updates to Project Oxford, a set of online services that help developers build more intelligent apps with sophisticated features like the ability to recognise faces.

Project Oxford was announced last spring at Microsoft’s Build event. There, Microsoft explained that it has done a huge amount of machine learning research that most companies couldn’t do for themselves, and that it has the computing power in its data centres to handle the heavy processing these tasks require. By offering that work as services, Microsoft can help developers do things they would never be able to do on their own.

At the Future Decoded conference in the UK, Microsoft announced new capabilities for speech recognition in loud, busy places, for identifying who is speaking, for stabilising video, and for spell checking.

But the coolest update is to the Project Oxford facial recognition service: it can now “look” at photos and rate how the people in them are feeling, scoring them on emotions such as happiness, anger, shock, and disgust.

Microsoft Project Oxford

The whole point of Project Oxford is to let programmers build their own things on top of these services. Using the Project Oxford Face API (programmer jargon for the “hooks” that programs use to talk to each other and the web), apps can put this capability to their own ends. For example, programmers might make iPhone apps that let you sort all your photos by how happy you look. Or, conversely, you could filter social networks so they never show pictures of you looking sad.
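
To make that concrete, here is a minimal sketch of what such a call might look like: the app uploads a photo over HTTP and reads back per-face emotion scores. It’s written in Python with the requests library, and the endpoint URL, header name, and response fields are assumptions for illustration rather than the documented API, so check Microsoft’s own Project Oxford reference before relying on them.

```python
# A minimal sketch of calling the emotion-scoring service over HTTP.
# The endpoint URL, header names, and response fields below are assumptions
# for illustration; substitute the documented values and your own key.
import requests

ENDPOINT = "https://api.projectoxford.ai/emotion/v1.0/recognize"  # assumed endpoint
SUBSCRIPTION_KEY = "your-project-oxford-key"  # placeholder

def score_emotions(image_path):
    """Send a photo to the service and return emotion scores for each detected face."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            ENDPOINT,
            headers={
                "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,  # assumed header name
                "Content-Type": "application/octet-stream",     # raw image bytes
            },
            data=image_file.read(),
        )
    response.raise_for_status()
    # Assumed response shape: one entry per face, each with per-emotion scores.
    return response.json()

if __name__ == "__main__":
    # Example: sort a set of photos by how happy the first face in each one looks.
    photos = ["birthday.jpg", "monday_morning.jpg"]
    happiest = sorted(
        photos,
        key=lambda p: score_emotions(p)[0]["scores"].get("happiness", 0.0),
        reverse=True,
    )
    print(happiest)
```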

The business applications are equally interesting. Market research firms, for instance, could build software that takes pictures at key moments during an advert and gets a more scientific read on how a focus group is reacting.

You can head over to the Project Oxford site and use a simple demo that lets you upload a photo and see how the service rates you on emotional scales.

There are other new Project Oxford capabilities with similar potential for making smarter apps:

Recognising who’s talking. The speaker recognition feature could be used as an extra security measure in the enterprise. The new CRIS, or Custom Recognition Intelligent Service, lets developers train speech recognition to better handle the unique acoustics of environments like loud public spaces; it will be available in an invite-only beta later this year.

Video editing. The video features can automatically edit footage, letting you do things like trimming smartphone videos so they only show the moments when people are moving in your shot, and smoothing out your shaky hands. These will also be coming to developers in beta by the end of the year.

Spell checking. Perhaps the most immediately useful addition is a spell-checking API. Because Microsoft can constantly update its dictionary with new slang and brand names, developers automatically get the latest version in their apps with no intervention required. It means a smarter dictionary.
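
Because the dictionary lives server-side, an app simply asks the service for corrections each time rather than shipping its own word list. A rough sketch of that round trip, again in Python with the requests library, is below; the endpoint URL and parameter name are placeholders for illustration, not the documented API.

```python
# A rough sketch of calling a hosted spell-check service over HTTP.
# The endpoint URL and parameter name below are placeholders for
# illustration, not the documented Project Oxford API.
import requests

ENDPOINT = "https://api.projectoxford.ai/text/v1.0/spellcheck"  # assumed endpoint
SUBSCRIPTION_KEY = "your-project-oxford-key"  # placeholder

def check_spelling(text):
    """Ask the service for spelling suggestions for a piece of text."""
    response = requests.get(
        ENDPOINT,
        params={"text": text},  # assumed parameter name
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    )
    response.raise_for_status()
    return response.json()  # suggested corrections, per the service's schema

print(check_spelling("Bill Gtaes founded Micorsoft"))
```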

Microsoft headquarters

(Thank you to BI for some of the info!)
