Mobile-first measurement extraction

This page serves as a preliminary accuracy report and provides insight into Treedy's capability to extract very accurate body measurements from a single low-resolution depth map, like the ones produced by the depth sensors found in modern mobile devices. Treedy's is currently the leading provider of technology enabling the transition to measurement-enabled online fashion retail.


1: What we did

First, we will take a look at our measurement extraction from single depth maps on unclothed bodies.

 

We started by generating 50 randomly-sized humans in “T pose”.

1.png
 

Then we created a set of 83 measurements on a template human body using our own measurement definition software.

 

We gave random poses to the 50 humans and created a single low-resolution depth map of each one.

3.png
4.png
 

This low-resolution depth data was fed to the latest generation of our body extraction DNN, which outputs a complete human body in “T pose”.

 

These images show the input depth maps together with the inferred “T-pose” meshes output by the DNN, rendered in the same 3D space.

5.png

2: What we found

We took these inferred bodies (in green above) and ran them through our measurement pipeline to evaluate the accuracy of the results. Scroll down to take a look!
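Schematically, the end-to-end test loop looks like the sketch below. Every function here is a hypothetical stub standing in for a pipeline stage (depth rendering, DNN inference, measurement extraction), not our actual API:

```python
import numpy as np

def render_depth_map(posed_mesh, resolution=(240, 180)):
    # Stub: a real implementation would rasterise the posed mesh
    # from a virtual depth sensor at low resolution.
    return np.zeros(resolution)

def infer_t_pose_body(depth_map):
    # Stub: a real implementation would run the body-extraction DNN
    # to produce a complete "T-pose" mesh from the depth map.
    return depth_map

def measure(mesh):
    # Stub: a real implementation would apply the 83 measurement
    # definitions to the mesh and return one value per location (cm).
    return np.zeros(83)

def evaluate(posed_meshes, ground_truth_meshes):
    """Signed measurement errors: one 83-vector per test subject."""
    errors = []
    for posed, truth in zip(posed_meshes, ground_truth_meshes):
        inferred = infer_t_pose_body(render_depth_map(posed))
        errors.append(measure(inferred) - measure(truth))
    return np.stack(errors)
```

With 50 subjects, `evaluate` yields a 50 × 83 error matrix from which all the statistics below can be derived.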

First off, here is the mean absolute error at each of the 83 measurement locations. As you can see, it’s pretty low: most measurements have a mean absolute error of around 1 cm, and none are above 2 cm!

6.png

Looking at the RMSE values for the same data points, we can see that the spread of our results is also very small for the great majority of measurements!

7.png
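As a concrete illustration, the two error statistics above can be computed per measurement location as follows. This is a minimal sketch: the array names, shapes, and the synthetic ~1 cm noise are assumptions for illustration, not our actual data:

```python
import numpy as np

# Hypothetical data: 50 subjects x 83 measurement locations, in centimetres.
# `truth` holds measurements taken on the original meshes,
# `pred` holds the same measurements taken on the DNN-inferred meshes.
rng = np.random.default_rng(0)
truth = rng.uniform(20.0, 120.0, size=(50, 83))
pred = truth + rng.normal(0.0, 1.0, size=(50, 83))  # ~1 cm synthetic error

errors = pred - truth                       # signed error per subject/location
mae = np.abs(errors).mean(axis=0)           # mean absolute error per location
rmse = np.sqrt((errors ** 2).mean(axis=0))  # root-mean-square error per location
```

Note that RMSE is always at least as large as MAE for the same errors, which is why it is the stricter of the two checks on the distribution of results.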

This is excellent news and a great validation that our method works, even in this early prototype test!

If we make a small tweak and extract the measurements in a more appropriate body pose than the raw DNN output, we get the following mean absolute errors (most below 1.5 cm)!

8.png

Finally, expressing the errors as percentages of the measurement values and plotting the interquartile range in %, we get the following accuracy chart:

9.png

As we can see, the vast majority of measurements fall close to 1% accuracy on the majority of meshes in the test. Where we do have outliers, the outer quartiles remain within a quite limited spread, well within what is necessary to get an actionable set of data for clothing recommendation!


The red crosses mark extreme outliers, where they exist. These denote human bodies that our inferential model is less apt at recognizing for now, showing us where we need to focus our work as we build out this system.

Now bear in mind these results were achieved with:

  1. a simulation of very low-quality data

  2. using a single instantaneous depth capture with no refinement / blending of multiple poses

  3. using the first generation of our machine learning pipeline, with no optimization!

10.png

The data generated for this test is very similar to what you would get from a 3D sensor found in a mobile phone right now (early 2020).

We purposefully designed the test this way because we firmly believe that 3D sensors will quickly become ubiquitous in mobile devices, as they allow vast enhancements to computational photography at a fraction of the cost of adding more high-density CMOS sensors.

11.png

3: Conclusion

We have shown that we can extract very accurate body measurements from a single low-resolution depth map, like the ones produced by the depth sensors found in modern mobile devices. We firmly believe that these depth sensors are coming to all mobile devices in the near future, to usher in a new era of AR content but also to enable better online fashion retail with access to customers' measurements. We will be the providers of the technology to enable this transition.


4: What is next?

The future of online fashion retail will include customer biometrics. The current model of online fashion distribution is unsustainable both financially and ecologically, and whoever can collect this data from customers in the most efficient way will win in the online retail marketplace.

We have the solution, and now we are looking to scale it out!

We are currently raising a series A to build out this pipeline. We know what we want to do, how much it will cost, and what we need to make it happen, and have half of the financing secured as of 13/04/2020.

If you are interested in taking part in defining the future of fashion, contact us at nicolasvh@treedys.com!

index.jpg