What’s New in ARCore (Google I/O’19)


[MUSIC PLAYING] BENJAMIN SCHROM: Hello,
and good afternoon. My name is Ben Schrom. I’m a product manager
on the AR team. I’m joined here by two of my
colleagues, Leon and Christina, and we’re here to
talk to you today about what we’ve been up
to lately with ARCore. I wanted to start out, though,
by giving some quick context. It’s been a big year
for AR, but we’re really just getting started. Last year at I/O, our AR
platform wasn’t even a year old. But everything
we’ve done before, and everything we’re
doing now, really goes back to a pretty
simple question. That smartphone camera
is less and less a camera in any
traditional sense, almost like your smartphone is
less and less a phone in any traditional sense. You really don’t need to
take much of a step back to remember that it was
pretty recently that cameras were for taking snapshots
and not much more. But this idea of AR stemmed
from a collective realization that these cameras we all
carry around in our pockets, they happen to be attached to
these little super computers that are also
packed with sensors. And so the question AR
asks is, what could we do if we thought about
these cameras themselves as one of the richest of all
the available sensors, a sensor that we can hold up
to the world and use what we see as core input? And from that stems, also,
a really pretty simple observation, that the
richer the set of inputs you give a computing device,
the richer the set of outputs it can produce. And this isn’t a new thing. This is really true for the
whole history of computing. The punch card gives way to the
keyboard, the mouse, the touch screen, voice, so on and so on. But given that AR experiences
can be within the world itself, there’s really no limit to the
richness of the experiences we can ultimately produce. To get more specific,
when I say richer inputs, I mean going way beyond the
sorts of things we mostly feed into phones
right now, things like touches, swipes,
numbers, letters, emojis. These are distinctly
not the inputs that humans use to
perceive the world. Humans, we perceive that the
world has three dimensions, that it has things in it
that have shapes and mass. And we’re super attuned
to things like light coming from different
directions and different sources and reflecting off
different materials. But our phones are basically
clueless about these things until quite recently. So when we combine the incoming
visual data from a camera with a bunch of other sensors
and computing resources, like IMUs and GPUs and ML
models and software algorithms, we can begin to give our phones
some of the same understanding of the world that we have. So in AR terms, this means
things like 6DoF tracking, or plane finding,
light estimation, or super precise
shared localization. And once you give a computer
these sorts of richer inputs, you can produce really
powerful new outputs. You can do what Streem is doing
and overlay a how-to video about how to change the
oil in your car directly above the place
where the oil goes. You can even call
on a remote expert to help you find specific
parts of the engine right in front of you. Or you could turn a smartphone
into the world’s quickest and easiest tape measure, one
that’s always with you, one that doesn’t always get
lost in your junk drawer just when you need it. Well, within Google
Maps, you can take a set of
walking directions, and you can place navigational
aids right where you need them, within the world itself. It’s like a set of dynamic
road signs placed just for you. You can also create wholly
new types of gameplay, like “Tendar,” a game
from Tender Claws, that uses facial expressions and
emotions to create a narrative around a virtual object. In this case, it’s a guppy that
feeds off of human emotion. Can you imagine how crazy this
pitch would have sounded even in 2015? But it’s totally possible
now, and it’s really fun. Or you can create
the types of scenes previously only possible
within the CGI lab within a Hollywood
studio, where you can place digital characters
into the world such that they feel like they’re
right there with you. We are working so
hard on AR at Google because we believe it unlocks
a new set of applications that are creative and helpful
in ways we could only imagine a few years ago. AR can be uniquely
helpful, because it can present useful, important
information within the most relevant context,
the world itself. And as we said
before, it gives you the ability to use what you
see as a fundamental input into your computing. And at the very
least, it saves you from having to type those 1,000
words each picture is worth. On the creative
side, what you see and what the camera
perceives can become a wholly new mechanic
for games or self-expression. And the creative digital outputs
of artists, storytellers, and game designers can now
inhabit the same spaces we do as people. Enabling exactly these
sorts of experiences is why we are building ARCore. Our goal with ARCore is to
give developers like yourselves simple and powerful
tools for bridging the digital and physical world. So today we’re going to give you
a recap of the progress we’ve made over the last
year, and then we’re going to walk through
a bunch of new things we’re bringing to the platform. To start with, I’m most excited
to say that since last year, we’ve almost
quadrupled the number of ARCore-enabled devices,
bringing that number to an estimated 400 million. And we did this by working
really closely with top Android OEMs to ensure new devices are
ARCore-compatible at launch. That also means the
number of ARCore devices will just keep growing as these
new phones sell in the market. We’ve also worked with lots of
developers, like all of you, to expand the number of
AR applications available. In fact, there’s now a dedicated
section of the Google Play Store that features over
3,000 ARCore applications. Here are some of our
recent favorites. Here’s “Pharos,” which
uses cloud anchors to let multiple players share a journey
through a universe created by Childish Gambino. Or there’s the
ColorSnap visualizer. It’s an app from
Sherwin-Williams that lets you see what
a new shade of paint looks like on your walls without
having to slap the paint on. Or there’s GeoGebra’s
3D graphing calculator that lets you create
3D math plots, place them in your space, and
explore them by walking around. The sheer breadth
of things that we’ve seen developers create in
a relatively short period has been remarkable,
and we really hope to see even more
interesting things built with some of the improvements
we’ll dive into now. With that, I’d like
to turn it over to one of the lead
engineers on our team, Leon. LEON WONG: Thanks, Ben. [APPLAUSE] Thanks a lot, Ben. Well, I can’t really
believe that it’s been a whole year since the last
Google I/O. And over that time, we’ve released six
updates to ARCore, and we’ve made improvements
to almost every part of the platform, from
algorithmic quality to developer tools,
and we’ve also added some great
new capabilities. I’d love to share some of
the highlights with you today, starting with
improvements to some of the fundamentals upon which
all AR experiences are built. Continuing to improve
the quality of our motion tracking and environmental
understanding algorithms has been a top focus for us. Not only does this create more
reliable and enjoyable user experiences, but we’ve seen that
improvements to the algorithms that underlie ARCore
have boosted ARCore user engagement and retention across
a broad range of applications. One of our biggest
achievements in the last year was improving ARCore motion
tracking robustness by 30%, with a large part of that coming
from better sensor calibration algorithms that
have helped ARCore adapt to the
diversity of hardware in the Android ecosystem. Now, there are always going
to be some cases where tracking simply fails. People will put their
phones in their pockets. People will shake
their phones too hard. And it won’t always be possible
for us to maintain tracking quality in those cases. So we feel that one of the
most important things we can do is educate users about
how tracking works and what they can do to
improve their own experiences. That’s why we introduced
an API to report tracking failure reasons. For example, when there’s
not enough visual texture in a scene to allow our
cameras to track motion, or there’s simply too little
light in the environment, or when there’s excessive motion
that saturates inertial sensors or can cause camera motion blur. By providing this kind of
feedback to applications, we hope that apps will be able
to guide users toward more successful AR experiences. Plane finding is
another experience that’s been a key focus
for our engineering team. Now, plane finding, for a
very large percentage of apps, is one of those things
you have to do in order to place AR content and
begin the overall AR experience that people
are trying to get into. But what’s not
really clear to users is that we need particular
kinds of camera movements in order to find
planes successfully. ARCore triangulates
where visual features are in three-dimensional
space by seeing them from multiple
different perspectives. So large, gentle motions
focusing on the target area of interest really work best. Rather than making users
learn how to do this better, however, we’ve focused on
reducing the amount of user motion that our
algorithms require by increasing the number
and types of visual features that we’re using
to locate planes. This has improved plane-finding
speed and success rates dramatically. For example, in Google’s own
AR measurement application, we saw a 50% reduction
in the amount of time it takes to find
an initial plane. To see this in
action, take a look at the graphic on the screen. It’s really hard
to see, but if you look at how little camera
motion is needed before you see the dots that indicate
we’ve found the floor plane, you’ll see just what
kind of progress we’ve made over the last year. Keep looking. There it is. It’s almost instantaneous
in a lot of cases.
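From the Java API side, an app typically just watches for a plane to start tracking before letting the user place content. Here is a minimal sketch of that check, assuming a session that has already been configured and resumed.

```java
import com.google.ar.core.Plane;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Check whether ARCore is tracking at least one horizontal, upward-facing
// plane (for example, the floor) so the app can invite the user to place content.
boolean hasTrackedPlane(Session session) {
  for (Plane plane : session.getAllTrackables(Plane.class)) {
    if (plane.getTrackingState() == TrackingState.TRACKING
        && plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING) {
      return true;
    }
  }
  return false;
}
```

Camera quality is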
another fundamental part of nearly every AR experience. When ARCore launched, we
optimized camera configurations for visual tracking performance. So we did things
like we fixed focus at infinity to make it easier
to model camera focal lengths, and we tightly controlled
exposure settings, frame rates, and resolutions to prevent
motion blur and limit compute. This made our computer
vision challenges a lot more tractable,
but it really wasn’t ideal for many
end user applications. For example, AR
photography has grown into a really important
use case for us, and we really wanted to
let AR take advantage of more of the
camera capabilities that users expect
from their devices. So over the last year,
we launched a number of important camera updates. We launched autofocus so that
AR photographs are sharp, even when scenes are close up. And we launched a feature
called Shared Camera Control. This is a feature that
lets applications quickly switch between
Visual Tracking mode for the camera and a mode that’s
controlled by the application so that they can choose
to do things like take higher-resolution photographs. And then finally,
we doubled your fun by adding front-facing
camera support, so you can take those
all-important AR selfies.
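Roughly, those camera options are surfaced in the ARCore Java API as a focus-mode setting and a front-camera session feature. The sketch below is illustrative only; exact availability depends on the ARCore version and the device.

```java
import java.util.EnumSet;
import android.content.Context;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.UnavailableException;

// Enable autofocus on an existing session so close-up AR photos stay sharp.
static void enableAutofocus(Session session) {
  Config config = session.getConfig();
  config.setFocusMode(Config.FocusMode.AUTO);
  session.configure(config);
}

// Create a session on the front-facing camera for AR selfies.
static Session createSelfieSession(Context context) throws UnavailableException {
  return new Session(context, EnumSet.of(Session.Feature.FRONT_CAMERA));
}
```

So in addition to working on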
the quality and reliability of ARCore end user
experiences, we also invested in our
development tools to help application creators
work more efficiently and take advantage of
our latest features. For Java developers,
dealing with 3D graphics can be a real challenge. So we launched Sceneform
at I/O last year. Sceneform makes it easy
to create 3D scene graphs and render them realistically,
all without the complexity of OpenGL. Since our launch of
Sceneform last year, we expanded its capabilities
in a lot of different ways. So for example, we added support
for external dynamic textures to allow you to do
high-quality video playbacks in your applications. We added screen recording
to help developers capture demo videos and
let users share screenshots on social media. And we added animation
support so that your 3D assets can come to life in AR. And then finally, we’ve
tried to keep Sceneform up-to-date by supporting
the latest ARCore features, like Augmented Faces.
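To give a sense of how little code Sceneform asks for, here is a minimal sketch of loading a renderable asynchronously and attaching it to the scene graph. The asset path is a placeholder, and the Context and ArSceneView are assumed to come from the hosting activity.

```java
import android.content.Context;
import android.net.Uri;
import com.google.ar.sceneform.ArSceneView;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.rendering.ModelRenderable;

// Load a Sceneform asset asynchronously (the path is a placeholder) and
// attach it to the scene graph once it is ready.
static void addRocket(Context context, ArSceneView arSceneView) {
  ModelRenderable.builder()
      .setSource(context, Uri.parse("models/rocket.sfb"))
      .build()
      .thenAccept(renderable -> {
        Node node = new Node();
        node.setRenderable(renderable);
        arSceneView.getScene().addChild(node);
      })
      .exceptionally(throwable -> {
        // Handle load failures (missing asset, unsupported format, and so on).
        return null;
      });
}
```

Now, for developers working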
in Unity instead of Java for their application
development workflow, we’ve been regularly updating
our ARCore SDK for Unity, so it always showcases the best
of ARCore’s growing platform capabilities. But because we know that many
application developers are building cross-platform
applications, we’ve also worked
closely with Unity on their AR Foundation package. AR Foundation lets
developers use a core set of augmented
reality features across both ARKit on iOS,
and ARCore on Android, all using a common API so that
you can maintain a single code base for your apps. And to make those cross-platform
experiences even better, we brought key ARCore
features to iOS, like Cloud Anchors, which lets
developers create multi-user AR experiences that
are all anchored in the same physical location.
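In the ARCore Java API, the hosting and resolving flow comes down to a handful of calls. The sketch below enables Cloud Anchors on the session and glosses over the asynchronous polling of the cloud anchor state.

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

// One-time setup: enable Cloud Anchors on the session.
static void enableCloudAnchors(Session session) {
  Config config = session.getConfig();
  config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
  session.configure(config);
}

// Host a locally created anchor; poll getCloudAnchorState() each frame until
// it reports SUCCESS, then share getCloudAnchorId() with the other devices.
static Anchor hostAnchor(Session session, Anchor localAnchor) {
  return session.hostCloudAnchor(localAnchor);
}

// On the other devices, resolve the shared ID back into an anchor placed at
// the same physical location.
static Anchor resolveAnchor(Session session, String cloudAnchorId) {
  return session.resolveCloudAnchor(cloudAnchorId);
}
```

Now, effective user interaction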
design is just as much of a challenge for AR
as software development. And designers are
still figuring out what’s working best for their
applications and use cases. In order to help
with this problem, we introduced ARCore Elements. ARCore Elements is a
set of UI components that Google has designed and
validated with user testing. You can use ARCore Elements to
insert common AR interaction patterns, like plane finding
and object manipulation, into your Unity apps,
all without having to reinvent the wheel. This helps users learn
actions that they can perform across
different applications, and it also makes it easier to
follow Google’s recommended AR user experience guidelines. So those were some examples
of the many updates and improvements we’ve made
to ARCore in the last year. Now we’d like to share some
of our newest capabilities, including several that
are launching this week. From the start, the
mission of ARCore has been to give
developers the ability to create more realistic
experiences that are available to more
users and in more places. And we wanted to give
our devices the ability to see and understand the world
in much the same way that we do, and to fully engage our
own human senses by rendering digital content in context with
the highest levels of realism. So let’s start by
looking at some of the new visual
perception capabilities that we’ve added to
ARCore to make it more useful in more contexts. By human standards,
ARCore launched with some pretty limited
visual perception capabilities. We could detect
horizontal planes, and soon after, we added
support for vertical planes. And this was really important
for allowing applications to place AR objects
in places like floors, on tables, and on walls,
where real objects often lie. But what we care about so
much more than objects in many of our life
experiences are people. So with this in mind,
we felt that one of the most important
canvases for AR should be the human face. And this really isn’t a very
new idea, if you think about it. We’ve been augmenting faces with
masks, makeup, and face paint for as long as we
can all remember, so it’s not really
that surprising that people are really excited
to take these experiences to a new level in AR. But high-quality,
three-dimensional face perception is a really
difficult technical challenge. Faces are complex 3D
surfaces, and people are highly attuned to the
smallest shifts in expression. Furthermore, faces
are deformable, and face-tracking
solutions need to work across diverse face shapes,
hairstyles, skin colors, and age groups. So to solve these problems and
help a wide range of developers be able to launch
face-based AR applications, we launched Augmented
Faces recently. Augmented Faces for
front-facing cameras provides a high-quality,
468-point, three-dimensional mesh that tracks head
movements and changing facial expressions. Best of all, we use
machine learning, so this works on devices
without depth sensors.
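For a sense of what this looks like in code, here is a minimal Java sketch of enabling the face mesh on a front-camera session and reading it each frame; treat the setup details as illustrative, since they can vary by ARCore version.

```java
import java.nio.FloatBuffer;
import java.util.EnumSet;
import android.content.Context;
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Config;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.UnavailableException;

// Create a front-camera session with the 3D face mesh enabled.
static Session createFaceSession(Context context) throws UnavailableException {
  Session session = new Session(context, EnumSet.of(Session.Feature.FRONT_CAMERA));
  Config config = new Config(session);
  config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
  session.configure(config);
  return session;
}

// Each frame, read the 468-vertex mesh and region poses for every tracked face.
static void onFrame(Session session) {
  for (AugmentedFace face : session.getAllTrackables(AugmentedFace.class)) {
    if (face.getTrackingState() == TrackingState.TRACKING) {
      FloatBuffer vertices = face.getMeshVertices();  // x, y, z per vertex
      Pose noseTip = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
      // Attach renderables to the mesh or to region poses like the nose tip here.
    }
  }
}
```

Now, if you think about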
the difference in realism you can achieve between a
plastic mask and motion capture-based CGI
effects, you’ll start to understand why we’re
so excited about what developers are going to do with these
high-quality face meshes. Augmented Faces is
unlocking new use cases for photography, social
media, and commerce. We’re seeing really
strong interest from brands and retailers
for use cases like trying on makeup,
hair colors, eyeglasses, and accessories. And of course,
people are creating tons of fun photos with
everything from beauty effects to face morphing,
and so much more. In fact, interest
in Augmented Faces has been so high
that we’ve decided to bring Augmented
Faces and make it available on iOS this summer. It’ll have the same high-quality
468-point face mesh as Android and will work on all
ARKit-capable devices without requiring
a depth sensor. And after we launch,
developers will be able to create Augmented
Faces applications that have the potential to reach
a billion users across iOS and Android. One of the first cross-platform
experiences to take advantage of this will be Meitu’s
BeautyPlus application, which will feature– I see there’s some
Meitu fans back there. Right on. And so BeautyPlus will feature
a number of great face effects, including this example, which
is a little gem that Ben’s been playing with. In the experience here,
it tosses a birthday cake in your face. And true story– we
actually couldn’t figure out how to make it work. We tried and tried,
and then we realized that the trigger for
the birthday cake is actually when you open your
mouth in order to blow out the birthday candle. Perfectly natural, but
it’s that little moment of surprise and delight when you
figure that out that represents the kind of moments we hope
more developers will create when they use Augmented Faces. And if you’re here
with us on site, please try Augmented
Faces out for yourselves. There’s a demo out in the
sandbox area behind the tent, and you can try a
photo booth experience where you can create some
neat selfies that you can share with your
friends and followers. So now I’d like to talk
about a different class of visual perception that
we’re adding to phones. And we think it has just as
much potential to be helpful and fun as Augmented Faces. And that category
is Augmented Images. Think about all the 2D
images in our space. There’s maps, signs,
posters, labels. These are the main ways we
annotate our physical space with information. They’re cheap, they’re easy to
print, they’re easy to place, and they’re everywhere. Now, think about
how much more useful these things would be if
your phone could recognize and transform each one into the
anchor for an interactive 3D experience. That was the vision
that prompted us to develop Augmented
Images and launch it earlier this year. Version 1 of Augmented Images
had some limitations, however. It used image
detection and object pose estimation
algorithms that were too expensive for us to run
on every single camera frame. So what we did was we
actually used ARCore motion tracking to update the pose and render the correct 3D perspectives on your AR
content as your device moved. So the results here
were very high-quality, but this only works when
your target image is static, so it limited the set of
use cases we could support. To overcome this
limitation, we significantly revamped the computer
vision algorithms behind Augmented Images. And in ARCore version 1.9,
which is launching this week, we’ve added the ability to
track moving target images. Along with these
algorithmic changes, we’ve improved image
detection recall by 15%. We’ve boosted tracking
precision by 30%. And we’ve gained the ability
to track multiple objects in the same frame. So with moving
augmented images, you can now do things like attach
AR content to movable objects, like product boxes, printed
documents, and game pieces, like in this
example from JD.com, which is a children’s
spelling game. In this game, once you
spell a word correctly, it gives you positive
feedback by showing you that word in action. You can also do things
like alter physical reality with moving augmented images. So in this example called
Notable Women, which is a collaboration between
Google Cloud Creative Lab and Rosie Rios, who was the 43rd
treasurer of the United States, the application highlights
the achievements of notable historical
American women by swapping their
faces onto US currency. The level of
realism here is only possible with the improvements
we’ve made to augmented images. So that was just a
quick introduction to Augmented Faces and
Augmented Images, but if you’re interested in learning more
about these technologies and using them in
your own applications, please join us at tomorrow
afternoon’s dedicated session. So now that we’ve
had a chance to look at some of ARCore’s new visual
perception capabilities, I’d like to turn things
over to Christina, who’s going to talk about the
ways we’re bringing greater realism and utility to ARCore. [APPLAUSE] CHRISTINA TONG: Thank you, Leon. One of AR’s fundamental
goals is to blend the virtual with the real. I want to really believe
that that virtual pet is here with me. And that couch that I’m
thinking about buying, I want to see it in
AR in my living room as if it were actually there. And realism really
matters for immersion. Just think about the difference
between great and not-so-great CGI in the movies. Having realism
really helps to keep users grounded in
that experience and engaged with
that experience. And one of the key
parts to making AR real is to get the lighting right. Let’s take a look at why. So take a look at this picture. What do you see? I think it’s a
pretty simple photo. We have a chair, a
plant, and a mirror, all against a simple
wooden wall here. But there are actually so
many human perceptual cues that we see in the scene,
and we think about these and use them subconsciously
to understand what’s happening in the scene. One of those perceptual
cues is specular highlights, which are shiny spots that
appear on objects when light illuminates them. Another perceptual
cue is shadows, areas that are darker
because less light falls on them, because that light
is blocked by other objects. We also see
differences in shading. Some objects are angled
differently from the camera. They are farther
away or closer to us, or they have different
material properties, like being less or
more reflective. So we can see that even in
a simple image like this, we actually have so many
different perceptual cues that we use to understand
what’s going on. These are the inputs
to our understanding. And the output is
that we actually understand that in this
scene, the light is coming from the front-right
of the scene, and that the light is
actually pretty bright. Now, what if we wanted to add
an AR object into this scene, but maybe an AR object
that we wouldn’t normally see in a scene like
this, like this rocket? Now, note here that the
rocket is pretty shiny. It’s very reflective. It has details like
the rivets, and it’s got a reflective window. The rocket’s also casting
a nice, soft shadow on the ground. We’re going to put this rocket into the
scene that we just saw, and we’re going to use a very
common heuristic to light it. That heuristic will be to
take the average brightness of the pixels in
the scene, and we’re going to apply that to the
rocket’s surface equally as ambient illumination. Now, ambient light gives
the same intensity to every object and every
surface in the scene from no particular direction. For a shiny object
like this, we’re going to see that using ambient
illumination only to light the object doesn’t necessarily
result in the prettiest result. As you can see here, this
doesn’t look quite right. Now, what is really
great is that the rocket is anchored onto the floor. One of the first steps
towards having a realistic AR experience is to actually
have your objects look like they’re sitting there
and grounded on the floor. But this rocket
doesn’t have a shadow, and it looks really dark,
because its material here is kind of dark, and
there isn’t enough energy from the ambient intensity
only to create that shininess. Now, there are some
tricks we could use to fix this, like
lightening the overall material. But then the rocket might look
too bright in other scenes, and we still
wouldn’t capture any of the shininess or
the shadows that we want to see on a real object. What we would really
want, ideally, is to be able to
actually understand where light is coming from
in the scene from 360 degrees and in high dynamic range,
which is the range of light that humans see. Ideally, we would
also want ARCore to do this for us out of the box. The result might look
something like this. On the right-hand
side, we can see that this rocket
looks much more real and integrated into the scene. Notice how the shadows
on the legs of the rocket actually match the shadow
direction coming off of the legs of the
planter in the scene. Notice how the specular
highlights on the rocket also match the direction
of the light coming in. As much as you might be able
to believe that a rocket would actually be in a
scene like this, this one really looks
like it’s there. So today we’re excited to
announce new ARCore APIs that will allow you to render
realistic AR assets like the one on the right. In fact, the image on the right
was captured live on a Pixel 3 running our new APIs. Out of the box, these new APIs
will provide three things– directional lighting, ambient
spherical harmonics, and a cube map for reflections.
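In terms of the Java API, enabling the new mode and reading those three outputs looks roughly like the sketch below. The method names match how the Environmental HDR mode is exposed in the ARCore SDK, but treat the details as illustrative, since the feature is just rolling out.

```java
import android.media.Image;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.LightEstimate;
import com.google.ar.core.Session;

// One-time setup: switch light estimation to the Environmental HDR mode.
static void enableEnvironmentalHdr(Session session) {
  Config config = session.getConfig();
  config.setLightEstimationMode(Config.LightEstimationMode.ENVIRONMENTAL_HDR);
  session.configure(config);
}

// Each frame, read the three outputs and hand them to the renderer.
static void onFrame(Frame frame) {
  LightEstimate estimate = frame.getLightEstimate();
  if (estimate.getState() != LightEstimate.State.VALID) {
    return;
  }
  float[] mainLightDirection = estimate.getEnvironmentalHdrMainLightDirection();
  float[] mainLightIntensity = estimate.getEnvironmentalHdrMainLightIntensity();        // linear RGB
  float[] sphericalHarmonics = estimate.getEnvironmentalHdrAmbientSphericalHarmonics(); // ambient coefficients
  Image[] cubeMap = estimate.acquireEnvironmentalHdrCubeMap();                          // for reflections
  // ... upload these to the rendering engine, then release the cube map images.
  for (Image face : cubeMap) {
    face.close();
  }
}
```

So let’s walk through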
what each of those APIs can provide us by
looking at a live demo. We’re going to welcome Ben back
to the stage to help with that. [APPLAUSE] BENJAMIN SCHROM: Thank you. CHRISTINA TONG: Now, the first
thing that we’re going to do is to place the rocket
back into the scene with ambient illumination only. As we can see here,
it doesn’t look very grounded or realistic. There’s no shadow. But one of the simplest
things that we could do is to add in a shadow. Unfortunately, without
these new APIs, we wouldn’t actually know
where the strongest light in the scene is coming from. But we could do the simplest
and naive thing here, which is to put a single
directional light from directly above, and going
down at the ground. So let’s take a look at
what that looks like. It’s looking a little better. We can see the
specular highlights on the top of the rocket,
and we can see a soft shadow on the bottom of the rocket. But as Ben will show
you, that shadow doesn’t actually
match the direction of the shadows that are on
the table and on the chair. So in a second,
we’re actually going to turn on the
environmental HDR lighting, and we’re going to see how
this shadow will actually immediately change to match
the shadows in the scene. And we’re going to see that,
because we’re understanding light’s direction in the scene. So as you can see here,
using machine learning, our algorithm has actually
noticed where the strongest directional light is coming
from and rendered that onto the rocket. Now, you can see the shadows
in the same direction, but you’ll recall
that the rocket is supposed to be pretty shiny. And this rocket that we
see in the scene right now isn’t quite shiny yet. What we need is a way to
get realistic reflections from all directions. And to do that, we’re going to
turn on the cube map, which is also provided by our new APIs. Now we can see that
it’s turned on. You can see that the
shadows, reflections, and the lighting on
the rocket really match those of the surroundings. Even in the stage
lighting that we have here, which is a
fairly unnatural environment to be standing on
a stage, we can see that our estimation of where
the light is coming from really works. Thank you, Ben, for the demo. [CHEERING AND APPLAUSE] So one other thing to note is
that the lighting on the stage was pretty static
during the demo. But in the real world,
lights are often dynamic. Let’s go back to
the slides here, and we’re going to take
a look at what happens when the lights are dynamic. So environmental
HDR lighting also works even when the lights are
moving around in the scene, as you can see right here. Now, one of these
figures is virtual, and we’re going to take a quick
vote to see if you can tell. Raise your hand if you think
the one on your left is virtual. All right. Raise your hand if you
think the one on this side, on your right, is virtual. All right. And raise your hand if
you can’t quite tell. All right. So we fooled some of you. The one on your right-hand
side is the real mannequin, and the one on
your left-hand side is a virtual mannequin
placed in AR. And we can see that
the lighting on both is reacting dynamically,
realistically, as the light in the scene
pans from left to right. And this footage was also
captured live on a smartphone. If you’re at I/O, you can
see this scene for yourself today in the sandbox. Now, you might be
wondering, do I have to know all about
specular highlights, and shadows, and the rest to
be able to use this in my app? Don’t worry, you don’t. We take care of that for you. So these APIs will work
for you out of the box. But I want to give you
a little sneak peek at the challenges
we overcame and some of the machine learning magic
that we use for this feature. So we really had to
overcome two key challenges. The first challenge is
that cell phones have a really limited field of view. In fact, your cell phone only
sees 6% of the 360 degrees around you. The second challenge
is that phones see in low dynamic range. So you and I, humans,
we can generally see very bright and
very dark, and we can tell the difference
between those extremes. But your cell phone can see
some bright and some dark, and the range between
those is not very large. That’s low dynamic range. So our challenge was to convert
a single, low dynamic range frame that sees 6% of the
world, and extrapolate from that the full 360-degree
lighting in HDR. So we do this with
machine learning using a TensorFlow Lite
neural net and training samples like
the one pictured here. So you don’t have to worry
about sensing perceptual cues and translating that
into AR lighting. We’ll take care of that for you. If you’re interested in learning
the details of how we developed this feature and how
to use it in your app, please attend tomorrow
morning’s dedicated session. So again, that’s
environmental HDR lighting, coming this summer
to all ARCore phones. It will help your users
to experience true realism and immersion in the
apps that you build. Now, we’ve just talked
about exciting features, like realistic lighting,
Augmented Faces, Augmented Images, and more. But we also want to make
it really easy to bring ARCore features to your users. We want to make
it easy for people to access compelling AR
experiences from your website. So today, we’re
introducing Scene Viewer. Scene Viewer is a
3D and AR viewer that runs natively on Android
and allows users to seamlessly put any 3D content from your
website into your space, just like this penguin. Now, this penguin
is being launched straight from Google Search into
my space in its lifelike size. You may have seen something like
this in the consumer keynote this morning. Now, what’s really important
is that the best of ARCore will work out of the
box with Scene Viewer. That includes motion tracking,
plane finding, and more. And the environmental
HDR lighting you just saw will be coming soon to
Scene Viewer as well. You can even add
in-context titles and calls to action
for your users. In the flow that’s
pictured here, I’m thinking about
buying a new chair, and I want to put
it in my bedroom. To do that, I want to use
Scene Viewer to actually view that chair in my space. So in the right-most
image here, we have the AR chair placed
into the real bedroom. And the nice thing is
that from that AR view, I can actually directly intent
to the URL that will allow me to purchase that chair. Now, let’s take a look at how
easy it is to use Scene Viewer. You may know about
Model Viewer, which is an open source,
cross-browser web component that allows users to see objects
in 3D in the browser. Scene Viewer will work
hand-in-hand with Model Viewer. Whenever Scene
Viewer is available, Model Viewer will
seamlessly intent out to it. All it takes to enable Scene
Viewer is to add two letters– two letters only–
to the HTML tag. Not surprisingly, those
two letters are A and R.
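For completeness, a native Android app can also hand a model off to Scene Viewer with a plain view intent. The sketch below follows the intent pattern Google documents for Scene Viewer, with a placeholder model URL; treat the exact parameters as an assumption.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Launch Scene Viewer in AR mode for a hosted glTF/GLB model.
// The model URL here is a placeholder.
static void launchSceneViewer(Activity activity) {
  Uri uri = Uri.parse("https://arvr.google.com/scene-viewer/1.0")
      .buildUpon()
      .appendQueryParameter("file", "https://example.com/models/chair.glb")
      .appendQueryParameter("mode", "ar_only")
      .build();
  Intent intent = new Intent(Intent.ACTION_VIEW);
  intent.setData(uri);
  intent.setPackage("com.google.ar.core");
  activity.startActivity(intent);
}
```

Next, let’s take a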
look at an example where Model Viewer intents out
seamlessly to Scene Viewer. We’ve been working
with partners like NASA to bring their web
content to life. This model of the
Mars Curiosity Rover goes straight from the
web into your house. Whether it be for
shopping or education, it is often dramatically
more compelling to see objects in
their real-life size, to get up close to
them, and to view them as if they were actually
there in your space. If you’re at I/O, you can also
experience the Mars Curiosity Rover in Scene Viewer
in the AR sandbox. To learn more about how you can
build amazing AR experiences, attend some of these
upcoming sessions over the next few days. And you can check
out our ARCore demos in Sandbox B. You can
also come to our code labs for some detailed
tutorials on how to use these features in your apps. So thank you so, so much
for listening to this talk. We’re really looking forward to
seeing what you build with ARCore. Thank you. [MUSIC PLAYING]
