Good morning. We will be starting soon.
We’re happy that you will be joining us for two days of
talks, training, demos and more. By now, you’ve checked into
registration to receive your badge. You will need it for the
amazing afterparty at the end of the day. The help desk is
located near registration. If you have any questions or are in
need of assistance, feel free to stop by. All sessions will be
taking place in Hall 3A or the Jacaranda Room. There is also
training at the conference centre where instructors will
teach you how to use the latest Google technologies. No Google
event would be complete without showcasing the newest products
and technologies, so we invite you to explore the different
demos, office hours, and review clinics in the Sandbox area. Be
sure to check out the community lounge and the Google Developers and
Cloud certification lounge, which are located in the Sandbox
area. There will be scheduled
meet-ups, engagement activities, as well as places to just sit
and relax and meet with your peers. We would like to take
this opportunity to remind you that Google is dedicated to
providing an inclusive event to everyone, and by attending, you
agree to our code of conduct, which has been placed around the
venue and on the website. Thanks for attending Google Developer Days India and have a
wonderful time exploring everything that Google has to offer. [Cheering]
Namaskara, suswagata! [Hindi spoken]. [Cheering] [Hindi spoken]
I tried to speak Hindi. Actually, I have to go out of my
script now because I was expecting a mild response.
Sometimes, I pretend I’m from Kashmir because I have blue
eyes, and I try to speak Hindi, and I guess it worked a bit
today, right! Let’s see if you’re a bit more awake than I
thought. I will stick to English if you
don’t mind because Hindi is not my native language. I will
repeat my question. This time, I’m expecting a much, much
louder response from you. With all the sessions yesterday, the
concert, the free food, how good was the day yesterday? Tell!
[Cheering]. That’s much better. So, I’m Sebastian, and I have a
very complicated last name, as you can see. I’ve been 11 years
at Google, and I guess that makes me a bit old. I currently
lead our developer ecosystem teams across Asia, the Middle
East, and Africa. We try to organise the very inclusive
event over these two days. We have people from all over India,
and even beyond India. Thank you for coming all the
way. There are many men and many women. In
fact, GDD India is probably Google’s biggest developer event
in India to date, and according to the stats I have received,
we have 36 per cent of the audience who are women. We still
have a lot to do. Thank you. [Applause]. We still have a lot
to do when it comes to gender diversity but getting to 36 per
cent at a major developer event especially in India is already
pretty good, so thank you. Google cares about diversity and
inclusion. When we develop products, we think about the
diversity of our users, wherever they are and whoever they are:
those who only use mobile phones, those who have disabilities.
Inclusion is also about respecting and including the
diversity of languages, opinions, religions. We want to
help you developers and start-up founders to think about how you
can make your products work for everyone, and also about the
challenges faced by different segments of the population. We also want you to
think about how we can create tech communities and engineering
environments which feel safe for everyone. Today, I would
like to talk to you a little bit about a start-up based out of Bangalore, to show how
technology can improve the lives of women, especially in
India. It uses artificial intelligence and machine
learning to come up with innovative ways of solving a
major health issue. Let’s start with a number: one out of four –
in India, one in every four women diagnosed with cancer has breast cancer. This is a
serious issue. In fact, breast cancer is the largest cancer
killer for women today. 500,000 women die every year from it.
Half a million every year. This is one of the leading causes of
death for women in general across the world. It directly or
indirectly affects all of us. We all have mothers, wives,
sisters, daughters, female friends, so we care about them,
and we love them. But having cancer doesn’t have to be
fatal: it’s actually quite often possible to recover from
it. Except in India. In India, one out of two women
diagnosed with breast cancer dies within five years. In the
US, the fatality rate is less than 20 per cent. In China, 25 per
cent. Early detection is one of the
main reasons for the fall in mortality rates in those
countries. The problem in India is that, for the majority of the
population, breast cancer awareness is almost non-existent
until it is too late. In addition, there are social,
cultural, and economic barriers. Indians don’t feel comfortable
or have pre-conceived notions about breast cancer screening,
and this results in most people identifying cancer at stage
IIB, which is a late stage. The survival rate is pretty
low compared to other countries.
Niramai is a start-up in Bangalore, with the CEO on the left
side. The name is an acronym; it also means
painless, or to be healthy, in Sanskrit. How is Niramai solving
this problem? It has built a diagnostic platform using
artificial intelligence to detect breast cancer at a much
earlier stage than traditional methods or self-examination. The
solution is low-cost, accurate, portable, and can be operated
by any clinician and, in the near future, by everybody.
So the start-up’s method of screening is based on the
principle of thermography. The solution is radiation-free, non-touch, not
painful, and it works for women of all ages. In case you’re not
aware, mammography only works for women older than 40.
By processing thermal images and using machine learning, you can
get reliable and accurate results and, best of all,
it is as accurate as mammography, if not better. This
unique solution can be used as a diagnostic test in hospitals to
detect cancer early, and can also be deployed at large scale across
rural areas. Niramai’s innovative approach provides accurate,
non-invasive, privacy-aware detection of breast cancer, helping to
identify the disease early in the process. Niramai has also
been accepted into the Google Developers Launchpad Accelerator
programme. It is a six-month programme that matches top
start-ups from emerging countries with the best of
Google, our people and technologies, to help scale great products.
We are excited to see how we can help Niramai scale their
fascinating solution. I would like to invite the CEO and
co-founder of Niramai on stage. Please give a warm welcome to Dr
Geetha Manjunath. [Applause].
>>Hi. >>Let’s have a seat. Thank you
for being with us today. I have a few questions for you.
>>Sure. >>What or who inspired you to
create this start-up just about a year ago?
>>The inspiration is really about how technology can make a difference. So, in fact, in my
family, very close cousins had this horrendous disease in their
thirties, and I lost them before they were even 40. So,
with that, as a background in my mind, I was thinking how can we
help detect cancer in young women? As early as possible so
that the number of deaths can be minimal. It is one of the only
cancers that can be cured completely if caught early. As a
researcher in the lab, I started working on this problem, and we
got excellent results, and it was a pure coincidence that we
were able to try and address the biggest problem in women’s
health today. >>Now, you have very
ambitious plans. What is next? What is on the road map?
>>Right now, we have the solution available in a few
hospitals in Bangalore. We are starting to do some more
clinical trials in prominent hospitals. We want to expand in
India and beyond, and create scalable solutions where we can
make sure that the data collected is accurate, and there
are good procedures in place so that there are no errors that
are coming up. We can train the technicians, so that they can
select the data well, and the rest is done in the cloud, so it
is just about how we get the data in. So that is the main
thing that we are looking at right now.
>>You got accepted, as I mentioned, into the Google Developers
Launchpad Accelerator Programme. Why did you apply to it? What
are you expecting from it? >>We’re very, very privileged
and happy to be part of the launchpad programme and looking
forward to the exciting time of learning, so, really, this
solution can be used by everyone, right? We are also
creating a very small handheld device so the solution can be used
by every gynaecologist and every physician to treat the lady
coming in. Scalability is very, very key, but who else but
the IT giants can help us reach that scale? So we’re looking forward to working with Google.
>>Great to hear. To inspire
you, because you’re a good observer of challenges, what
problem do you face every day in your life that you think nobody
has solved today, and maybe somebody here in the audience
can try and solve it in the months to come.
>>Yes, so maybe, the last 25 years, you know, we’ve looked at
several problems in my IT career, but the problem I’m
trying to see and solve is how do we convince people that AI
really works? Whatever problem you’re tackling, you’re dealing
with domain experts – doctors in our case. If you put in a
machine and say it can do 90 per cent as well as you doctors,
how do we convince them? That is a major challenge every day
we’re facing through additional clinical validation and so on.
We try to see how to prove that it is actually working:
put test cases and trials in place so that it is actually
proving the decision-making works, right? So that is
mainly what is different in the solution.
>>What is the best advice you can give to developers?
>>Yes, so, if you believe in a passionate idea, if you believe
that is an idea that you think it will work, don’t let somebody
else try it, because only you know the complete underpinnings
and issues that may come up. Pick it up, go for it. You will
be able to solve it. There will be several challenges, but you will
be the only one who knows how to handle those challenges. If you
believe in an idea, go for it, create a start-up. It is really
worth it. [Applause]. Thank you.
>>This will be my final question: do you have a quote
you live your life by, or maybe you think of often, or a book
that is a reference to you? >>Yes, so – two things I
believe in: one is to learn every day. Keep looking out for small
pieces of advice, be it from your neighbour, be it from your
boss, be it from your own small kid,
right? Be open every day to learning. The second thing,
especially when things aren’t going well: everything happens
for good. Whatever happens is for good. So I keep thinking
like that – if I’m here, something is coming up later – and that
keeps me going. These two things
are what make me run. Thank you.
>>Thank you for sharing these bits of wisdom. Thanks for
coming today. >>Thanks a lot. Bye.
>>And now, we are going to hear
about inclusive design: how you can design products with inclusion in mind. Subramanian
is a director at YouTube and will talk about inclusive design
with many interesting, funny, sometimes surprising examples.
Welcome on stage.
>>Thank you. Thank you, Sebastian. It is great being
back in India. I was born in Chennai, and I did my High
School in Calcutta. I was fortunate to grow up in
different parts of the world, including Lagos, Nigeria. So, to
be here today presenting inclusive design, and how we can
build products for all, including emerging markets like
India and Nigeria, is really exciting. As Sebastian said,
I’m from YouTube, and you will be seeing a lot of videos in my
presentation today. Let’s get started with one. Can reroll the
video, please? [Video] music and rap. This is a clip from one
of my all-time favourite YouTube Rewind Videos as it
captures the essence of YouTube. YouTube started with a very
simple mission: broadcast yourself. It has grown into a
global platform with global reach. We have over 1.5 billion
users visiting YouTube every month with over 80 per cent of
our views coming from outside the United States.
YouTube is an open and democratic platform, with over
400 hours of videos uploaded every minute by creators all
around the world, making YouTube the platform with the most
diverse content. In the time that it took me to say this
whole sentence, almost a day’s worth of videos have been
uploaded to YouTube. YouTube, unlike traditional media, has no
gatekeepers. Anyone can have a voice and reach an audience.
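The upload-rate claim above is easy to sanity-check; this Java sketch (names are mine, not from the talk) works out how long it takes for a day's worth of video to accumulate at 400 hours per minute:

```java
// At 400 hours of video uploaded per minute, how many seconds
// does it take to accumulate a full day's (24 hours') worth?
public class UploadRate {
    static double secondsForOneDay(double hoursPerMinute) {
        double hoursPerSecond = hoursPerMinute / 60.0; // ~6.7 hours of video per second
        return 24.0 / hoursPerSecond;                  // seconds until 24 hours accumulate
    }

    public static void main(String[] args) {
        // Roughly 3.6 seconds: about the time it takes to say one sentence.
        System.out.printf("%.1f seconds%n", secondsForOneDay(400));
    }
}
```

At 400 hours per minute the answer is 3.6 seconds, which matches the "time it took me to say this whole sentence" remark.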
While this is still very true about YouTube, a few years ago,
when I was looking at some usage data on YouTube, I realised as
our platform and usage has grown, human dynamics and
unconscious biases are creeping in. In YouTube, we see biases
and gender gap similar to traditional media. For instance,
a lot of our fashion content is created by female creators, and
a lot of our science content by male creators. This insight
drove me to start a pitch and a dialogue within YouTube, and
across Google, about diversity in the context of product
design, and how unconscious biases are playing into our
products. What if YouTube, beyond optimising for watch
time, became more intentional
about a demographic goal, such as gender reach or
ethnicity reach on our platform? This would open up opportunities
for us to deepen engagement and onboard more users on to the
platform, thereby driving growth. For the first time, we
expanded conversations around diversity at Google, from
diversity in hiring and building balanced teams, to diversity
being a critical factor in defining our product strategies
and growth strategies. What is inclusive design? It’s about
engineering product for all your target users across all
demographics. Broadening your demographic reach helps unlock
opportunities that will drive growth for you. Inclusive design
is not just about user experience or visual design, it
is also about the algorithms and machine-learning, about testing
and training data, about how you brand your product, the
marketing, the PR, and more. It is about asking the question for
which of my target users can I be doing more for? It can be in
the form of gender reach. It could also be that your product
works great when you have good connectivity but not so well
when you’re in the developing world. It could be optimising
your experiences to deepen engagement with a certain
cultural group. We as an industry have been doing
inclusive design for years now in the form of accessibility
work, and this is about expanding that across other
dimensions. For instance, when airbags first came out, they
caused more deaths and injuries in women and children. Why?
Because airbags were tested with only tall male crash-test
dummies. Were the engineers who designed this test programme and
plan sexist? No, it was their unconscious biases
that informed their approach. Female drivers tend to have a
smaller build, and, in real-world crashes, they have a
47 per cent higher chance of fatal injuries. Once the
automobile industry realised this, and incorporated women
into their design, and started using female crash-test dummies,
safety of airbags improved significantly for women and, not
just for women, but for anyone with a smaller build. This
is a great example of gender-based inclusive design.
For this next example, I would like
to start by rolling a video. [Video plays]. Does anyone know
what is happening? For this, we need to go back to
the 1950s when Kodak dominated colour photography, and
introduced the original Shirley colour card, which became a
standard in photography but has one problem: it works better
with lighter skin tones. So what this means is you often have
exposure issues when taking multiracial photos. This became
a big problem in the 1970s when chocolate-makers and wood
manufacturers were having a hard time creating advertising
material because it was hard to capture the different shades of
browns in their products. Also, as the film and media industry
was starting to become more diverse. Finally, in the 1990s,
a group of engineers decided to take an inclusive design
approach and introduced multiracial colour cards to help
bridge this gap. While it has improved a lot, there’s still a
slight light-skin bias, and I’m really proud to say that, at
Google, the camera team has taken a proactive inclusive
design approach to help bridge this. Let’s see a video of what
the team has done. Let’s roll the video, please. [Video: “One, two, three … too dark. Excuse me …”]
[Applause]. How many of you
know of Cheetos? Okay, and like them? Most of the room! I
like them too. Did you know when Cheetos first came out, they
were only available in one flavour, the original cheese
flavour? That was until a janitor who
worked for the company started adding chilli and lime to the packets
of Cheetos. His friends and family took to it too. He
decided to pitch the
new flavour to the CEO of the company, who actually listened to
the janitor and experimented, thereby unlocking new markets.
Every time I tell this story, people want to know what
happened to the janitor. He is now a senior executive at the
company, a true rags-to-riches story and a great case of
culture-based inclusive design. This next example is near and
dear to me. It is from YouTube. We wanted to increase the engagement of
YouTube with kids and families, so we brought kids into our UX
labs to see how they use YouTube. This is what we
normally see. This is how kids see it. They only care about
the content on the screen and nothing else. That is when we
realised to increase engagement with kids and families, we have
to reimagine YouTube from the ground up. We have to build a
special app just for them, and that’s how the YouTube Kids app
was born. The YouTube Kids app provides an easy-to-use,
engaging and enriching experience for kids and families
through a combination of several user experience and
interaction innovations, and also a lot of backend
algorithmic and classification work that we did. This is a great
example of age-based inclusive design, and it also showcases
that sometimes, to meet the needs of specific target groups,
like demographic groups, you may have to build something
special just for them.
I wanted to roll a video to show the original launch video of YouTube Kids. [Video plays]. [Applause]
>>How many of you use emojis? Most of you. You’re not alone.
90 per cent of the world’s online population uses emojis.
While there are many to choose from, they’re fairly
stereotypical. Like this is how boys or men are portrayed, and
this is how typically women or girls are portrayed. To bridge
this gap, and to inspire young girls, Google has introduced a
whole new range of emojis representing men and women in
different roles, and different skin and hair colours to be more
inclusive, and this is now becoming an industry-wide
standard with iOS, Facebook, Twitter, and others embracing
this. Again, a great case of gender-based and ethnicity-based
inclusive design. How many of you are
familiar with harassment of women in online gaming
communities? Any gamers in the audience? Let’s see a few. It
has been a big problem in terms of female gamers, and how
hashtags and online commenting is used to harass them. Many of
you might have heard of Gamergate as well. A few years ago,
we organised a Women At YouTube hack-a-thon around the theme of
bridging the gender gap, to bring about organic thinking and
momentum around inclusive design. One of the salient
projects in this hack-a-thon focused on improving our
comments system on YouTube to help women combat harassment and
feel safer on YouTube. As a result of this hackathon
project, we’ve released a lot of features and enhancements to our
comments moderation tools, providing a lot more control to
creators to handle this. A great example of gender-based
inclusive design, helping us deepen our engagement with our
female creators, and also shows you that grassroots
momentum can be built using things like hack-a-thons to bring
about change. YouTube Go: how many of
you use it? Some. Two years ago, we realised that emerging
markets have their own specific needs, with connectivity and
bandwidth issues – many of you, I’m sure, have experienced the
spinning wheel when watching videos, waiting for them to load.
I see some nods and smiles. There are also socio-economic factors:
not many have the latest smartphones, so there are hardware
constraints. And there is localisation of the content, to make it more
interesting for the local audience. To bridge these gaps,
we built from the ground up YouTube Go that provides
localised experiences and also focuses on offline first, and it
is very bandwidth-sensitive, giving users a lot more control
and transparency over how data’s used within the app. So now
I’ve shown you several examples of inclusive design across
different industries, many from YouTube and Google too, and I
want to switch to talking about how can we do inclusive design
at scale? How can you become more proactive in inclusive
design thinking rather than reactive after the fact?
At Google, these are the different typical phases of
product development that we have, and we’re becoming more
intentional about demographic goals at every phase of product
development. Starting from the idea and target-user definition,
we are looking at our metrics and our business opportunities
to understand what demographic gaps are there within our
target-user segments, and which ones should be prioritised to
try and fix and bridge. As part of target-user definition, for
instance, in addition to saying we are targeting users 18 to 34,
we say 18 to 34 with equal representation of men
and women. Once we have picked that demographic goal in that
target definition phase, we carry it through the rest of the
phases of product development. In testing, whether we are
selecting users for user research, running
live experiments and monitoring how the metrics move when
rolling out new features and changes, or doing internal testing,
we make sure our dogfood users are representative of the target
users we are trying to go after.
A lot of what we do at Google is also about machine learning and
algorithms. You might wonder: how
can machines be biased? They can be, because the training data
that goes into them has biases.
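To make that point concrete, here is a tiny Java sketch of my own (not a Google tool) that measures label imbalance in a training set, the kind of skew that quietly biases any model trained on that data:

```java
import java.util.List;

// A model inherits whatever skew its training data carries.
// Measuring simple label balance is the first sanity check.
public class LabelBalance {
    // Fraction of examples labelled 1 (the "positive" class).
    static double positiveShare(List<Integer> labels) {
        long positives = labels.stream().filter(l -> l == 1).count();
        return (double) positives / labels.size();
    }

    public static void main(String[] args) {
        // 9 of 10 training examples come from one group:
        // the bias is in the data before the algorithm ever runs.
        List<Integer> labels = List.of(1, 1, 1, 1, 1, 1, 1, 1, 1, 0);
        System.out.println(positiveShare(labels)); // prints 0.9
    }
}
```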
I want to show a video of the different kind of biases that
you want to watch out for. Let’s roll the video. [Video plays]. [Applause]. Finally, once a
product is built, it is important that the branding, the
marketing videos, PR, even the support that you offer, are
tailored to echo the demographic goals that you set. For inclusive design to really
work in your companies, it’s really important to make sure
you have diverse perspectives and balanced teams that
you build up, and to foster an inclusive culture. To end,
Google’s computer science
education and media efforts are increasing access to technology
education around the world for underserved populations, raising
awareness through unconscious-bias training, and also improving the
portrayal of engineers in the media, because this can be an
extremely powerful influencer. This next video I’m going to
roll will show you some of the work we’ve been doing in
partnership with the media companies. [Video plays]. [Applause].
>>So now it comes to you all. Hopefully, I’ve inspired you
about why it’s important to do proactive inclusive design
thinking, and I want to leave you with a little bit of a cheat
sheet of what you can do when you go back to your offices or
colleges. So, think about the demographic gaps that are there.
See how you can measure them and what metrics go with them.
Prioritise which gaps you want to go after solving, bridge
them, and celebrate your success. Also, foster an
inclusive culture within your teams through things like
hack-a-thons and bringing about inclusive thinking across the
board. As you go through the rest of the day, if you have
thoughts on one thing that you will do differently when you go
back, and you want to share it with us, please tweet it with #GDDIndia. Maybe the next
time we do developer days, maybe one of you will be here talking
about how inclusive design has changed your business and your
products for your users. Thank you. I want to invite Sebastian
back on stage for some wrap-up comments. [Applause].
>>Thank you. Thank you, Sowmya.
I hope you were inspired by this talk, and I had a few closing
remarks, if they could be loaded on my screen because I just
forgot them! One thing is I would like you to enjoy the rest
of the day today. You have a full day ahead of you. The
conversation doesn’t stop with this wonderful event. There are
many ways to keep in touch: Women Techmakers, Google
Developer Groups, Developer Experts, Launchpad, Android
certification programmes, agency programmes. To know more about
any of the programmes I mentioned, come talk to us
today or find us online. We are here to help you create amazing
solutions to help solve real problems, and I want to hear
from you. I want to hear these stories. I’ve been hosting
events across emerging countries for the past seven years, and I
have to say, this event is quite something. But it would not
have been possible to host this event without an amazing staff
and production crew – I will not name everyone, but
among others, Monica, Peter, Laura, Karthic, many whom I’m
forgetting, but it is thanks to them that we can come together
and share this experience. Let’s give them a warm round of applause. So, a final few words: bear with
me. [Hindi spoken]. [Cheering and Applause]. Thank you very much.
>>Ladies and gentlemen, please can you make your way to your
chosen break-out session which will begin in five minutes.
Thank you.
AMRIT: Good morning, everybody. I’m Amrit, I’m a developer at Google, and we will talk about
performance tooling. Imagine you built an amazing app, and
you’ve now rolled it out to a lot of users. They’ve installed
your app, and then they start to see that, even though
you’ve built an amazing UI for it, something’s not right:
things are loading slowly, it crashes sometimes, or it
takes too much battery, for instance. These are all things
that can hurt the user experience really, really badly.
Users might stop being able to use the app,
slowly start uninstalling, and you see a bigger uninstall rate.
If you look at performance, some of the things that we talked
about, it is actually a two-step process. You have to be
proactive about this and, at the same time, be reactive to
changes also. What I mean by proactive here is that you’ve
got to plan, measure, and profile your app before you
publish it. You might want to look at your competition; you
want to look at how well your app is performing; and maybe
benchmark against some of your competition to see whether your
app is meeting the expectations of the user. Once you publish
your app, that doesn’t mean that your job ends there. You need
to kind of continuously monitor and debug any issues that are
coming. So, once you roll out your app to millions of
devices, you need to study that data and start improving
the areas where you find the app is
lacking. I’m going to talk about some of the tools that you
can use and techniques that you can use to measure,
profile, and improve your app. I will start with Android Profiler. We
released this with Android Studio 3.0. I hope you guys are
mostly using this now; it is a replacement for the Android
Device Monitor. This is the unified timeline that you use
when working with performance. Let’s look at the areas in
it. You have the CPU, memory, and your network – three things
actually shown in terms of a timeline. On top of it, you also
have input events. This allows you to kind of track how your
app was actually performing in these three respects with
respect to what the user is doing. We also flag things like
system events like the rotation of the screen, and you can see
that icon shows that the screen is getting rotated at this
point, the activity is getting recreated, and you see a small
spike in the CPU at that point, so you can relate some of
these things in that one timeline. You also have the
activity state here, which allows you to kind of see what
the activity is doing at this point in time: is the activity
stopping? Is it actually running right now? Where is the
CPU spike? Where is the memory spike coming? You can use all
these points of information together as you see the
timeline. Let me jump into the CPU profiler. You might want to
understand, because we have in India, if we look at it, we have
a varying set of devices. We have a lot of low-powered
devices too where the processor is not as fast as the latest and
greatest of devices and people still continue to use that. You
want to be able to understand how your app is performing with
respect to CPU. This is, when you click on the CPU part of the
tab, this is what gets loaded up. Let me walk you through some
of the sections here. You have the thread list, which is all the
threads that are running, in the timeline that we talked about.
You have the CPU activity,
which basically has three things that you can very quickly
look at: the green, which is how much CPU is used by your
application; the lighter green, which shows how much CPU is used by
the system, so you can correlate the two;
and the number of threads running on
the device. The dotted line shows the number of threads. There are more
threads running right now so my CPU is spiking. The system is
not loaded but my app was actually using a lot of the CPU
– things like that. You also have the thread states. When the
CPU spikes, you can see the threads running in the system right
now, look at that correlation, and say: this is where I might
need to optimise – maybe it is code running in this
thread where I want to make some changes so that the CPU spike
is reduced at that point in time. But if you want to go
deeper into it, you need to do method tracing.
This is basically tracking the entry and exit of the methods that
your app uses. There are two ways in which you can actually
do method tracing. The red button starts it off,
but the more interesting thing is the drop-down next to it. The
first option is “sampled”, which means the
system will sample at regular
intervals of time, capturing the invocations and exits
of functions at each tick. The other one is called
“instrumented”, where every single invocation is tracked.
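To illustrate the sampled idea outside Android (a toy of my own, not the profiler's actual implementation), a sampling profiler simply wakes up at a fixed interval and records a snapshot of what the threads are doing, rather than hooking every call:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy sampling "profiler": snapshot all thread stacks at a fixed
// interval, the same principle as Android's sampled method tracing.
public class ToySampler {
    // Returns one total-stack-depth reading per sampling tick.
    static List<Integer> sample(int ticks, long intervalMs) throws InterruptedException {
        List<Integer> readings = new ArrayList<>();
        for (int i = 0; i < ticks; i++) {
            int totalDepth = 0;
            for (Map.Entry<Thread, StackTraceElement[]> e
                    : Thread.getAllStackTraces().entrySet()) {
                totalDepth += e.getValue().length; // frames currently on each stack
            }
            readings.add(totalDepth);
            Thread.sleep(intervalMs); // anything that runs between ticks is never observed
        }
        return readings;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sample(5, 10));
    }
}
```

Because work between ticks is invisible, sampling is cheap but approximate; tracking every single entry and exit is exact but far more intrusive, which is the trade-off the talk describes next.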
You might think that is much better, right? We get more
detailed information. Not exactly. Instrumented puts a much
higher load on the system because of the extra work it has to do,
whereas, if you use sampled, your system isn’t loaded as
much. So, depending on the use case you’re trying to solve,
pick one of these two. Let me load up one of the profiles.
Even though I’m showing static images here,
these are continuous timelines; this is a timeline
I’ve captured and loaded the data in here. At the bottom,
you have the call chart, the timeline, and the
threads. Let’s go deeper. I’m selecting a certain section of
the timeline where I see a CPU spike and then we want to
analyse that a little better here. Now, the
selected recording is the area in blue. You have a call
chart for that area displayed and visualised here. Let’s go
deeper into that. Now, if you look at the call chart, there
are a couple of things that you quickly want to understand. The orange bars represent system calls – what your activity classes and fragments end up invoking in the framework. Your code is the green part in there, and the blue you can see is third-party and Java API code. So let me take one example here. Look at that long green bar, which is an activity onCreate. At this point the device is rotating, the image detail activity is getting loaded, and you see that the onCreate is long. The activity itself takes only a third of that time; the rest is spent in an initialisation process and then an image fetcher. Armed with this information, you can very quickly understand that, hey, in this case, my activity could have loaded a lot faster if I moved some of this code, which is written in my application, to a later point in time, or somewhere else. Your activity
would load faster. Where the user would otherwise see a slow-loading activity, you have an option to improve it, by rearranging your code or rearranging the flow of data in your application. Instances like this are very easy to pick up when you start analysing. And this is not the only way to analyse.
You also have a top-down view, which starts at the top thread of your application and lets you drill all the way down to the final invocation. This is useful when you want to trace a path and see all the functions that are being called and what effect they have on the CPU. The other is the bottom-up view: now that you've decided, "this function is the place where I'm going to optimise", it shows you what is calling that function – you take the function and walk up from the bottom. This is the screen I'm talking about right now. That helps. The second profiler we are going to talk about is memory.
This again, for us, is very critical. In India, we have such a variety of devices, and the amount of RAM on these devices varies so widely from low to high, that out-of-memory exceptions are more common in some of the applications we build and deploy in our region. To help with that, you have the memory profiler, which colour-codes the memory – the Java heap, the stack, and so on – differently, so you understand how they're growing at each
point in time. You have GC events shown here, and a button that lets you force a GC, so you can ask, at several points in time: if I force a GC now, will that affect the application? Let's take a segment of it and analyse it more deeply. When you select a time segment, all the allocations in it are shown, per class.
For each class or type, you see how much has been allocated, and the allocations that happened over that period of time are clearly shown. You can also arrange it by call stack, which is often how you want it: when there's an area you want to look into, the call stack and the other options here really help you dig deeper. When you arrange by call stack, you will see: at this function, my app is allocating these variables taking this much memory. Say you've found that at this function the app is slowing down, or there are a lot of GC events happening, and you want to find out why. You see the allocation count, and you can trace it down to the last function in the chain and look at how much it is allocating. There is another option we have
which is very interesting: arranging by package. In our apps, we add a lot of third-party libraries so we can reuse code. You might want to work on certain parts of your own code and optimise them, leaving aside the parts you don't actually have access to, because you're just using a library there. What you can do is select by package, ignore all the packages you're not interested in focusing on, and pick the ones you want to work with, thereby improving those packages and the code that you've actually written. When you select one of the allocations, you will see all the instances of it that got allocated in that timeline, with some additional parameters.
parameters. This gives you a fairs — this
gives you a fair sense of how many times is it getting called?
Should I make a static variable of this? You can start thinking
of all these option was that. Now, clicking on one of those
instances, you also get the allocation stack. Like a
lot of places, you want to dig deep into so you can do a heap
dump and load is into the pro fairly.
This is how a heap dump looks when loaded. You get the same split view I talked about, but from an instance's properties you can get to its referencing instances. If you have a memory leak, this is a faster way to drill down to the point and see what references are holding on to the object. It is much easier to work with. The last profiler I want to talk about today is the
network profiler. Even now that we have a lot of bandwidth, people tend to optimise heavily for network usage, and one of the bigger reasons is the battery life of the device. Battery life can be affected drastically by overusing the network, because of the radio states: as most of you know, when you use the network, the radio goes to the high-power state. As you stop using it, it drops to a low-power idle state, which still consumes power and only turns off after some time. If you continually transfer or receive data, the device keeps the network hardware in the high-power state, thereby consuming more battery. You don't want an experience where your users love your app but the app drains the battery so quickly that it leads them to uninstall it. The network profiler
will let you see how much data you are transferring while at the same time looking at the radio states. That blue line shows the different states of the radio – whether it is in high power or low power, or whether the user is on Wi-Fi. Now, you have a data transfer
graph as well, showing how much data was sent and received. The blue graph indicates how much data is being transferred, and the solid orange line – not the dotted one – shows how much data has been received. The dotted line is the number of open network connections. Sometimes you will see that you're keeping unnecessary network connections alive and holding a lot of resources. These graphs visualise areas of
concern for you and allow you to drill down deeper to fix the
problem. Like all the other graphs, you can take time slices of this and look deeper into them. When you take a time slice, every HTTP/S request is shown: what type of network call it is, what you requested, the size of the request, and how much time it took. So you can literally go in and say, "I want to optimise this one network call." If you select one of those requests, you actually get the response; if it is an image, it will be rendered. It lets you check things like caching: am I downloading the same image too many times? Am I getting the right resolution of the image from the server? And you can also look at the headers to think about issues such as how long a cache lifetime the server has set, so that the cache will hold it for longer – things like that.
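That header check is simple enough to sketch. This is just an illustration of the reasoning, not part of the profiler, and the helper name is made up:

```javascript
// Sketch: pull the max-age (in seconds) out of a Cache-Control header,
// the kind of value you might inspect in the network profiler to judge
// whether the server lets you cache a response for long enough.
// The function name is made up for this illustration.
function cacheMaxAge(cacheControl) {
  if (!cacheControl) return null;                  // no header at all
  const directives = cacheControl.split(',').map(d => d.trim().toLowerCase());
  if (directives.includes('no-store')) return 0;   // must not be cached
  for (const d of directives) {
    const m = d.match(/^max-age=(\d+)$/);
    if (m) return parseInt(m[1], 10);              // lifetime in seconds
  }
  return null;                                     // header has no max-age
}

console.log(cacheMaxAge('public, max-age=60')); // 60: cacheable for one minute
```

A tiny max-age like that, on an image that never changes, is exactly the kind of thing you would then go and discuss with your server team.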
You can look at the call stack and see the different parameters
here. Now, with Android Oreo, you can profile any debuggable app without any changes to it. If you're targeting Nougat or below, you have to enable advanced profiling, and you need Android Studio 3.0. To enable it, when you run the app, the IDE will prompt you with a nice dialogue; you say okay, and advanced profiling is automatically enabled for you. The second thing
I want to talk about is APK debugging. Once you have a
problem and you want to look more into what is happening, right? To profile and debug, you might be developing a game using something completely different from Android Studio, and yet you want an option to analyse and profile your app in Android Studio. For this case, we added a new option: you can take your built APK and use Android Studio to profile it and find all the information we talked about earlier. When you do that, it creates a dummy project, loads your APK, and opens it up in the APK Analyzer by default. You can then attach your sources and libraries so that dependencies and other things are resolved, and you can start profiling your code better. With the APK Analyzer, there is one extra thing I
want to talk about. How many of you use the APK analyser?
I’m glad. Those who haven’t raised your hand, go back and
look at that tool – it is really useful. You can invoke the APK Analyzer and, clicking on "analyse", it will load a screen like this, where your app is broken down by the different parts of the APK – the classes, the DEX files, the resources, the manifest – and how much space each takes with respect to the total size of the application. By the way, there are two sizes displayed here: the raw size, the size of the APK as a file, and the download size. The download size is how much data is going to be downloaded for this app when you install it from the Play Store. The store can do additional compression, so the two won't match: the Play Store will send something smaller down the line, while the raw size is the size on disk. In this case, we've loaded the DEX file.
We visualise it by showing you the size each file is taking in your APK. A lot of the time, it is not the classes but the resources that are the offender when it comes to size increases: you may have a resource that is very large and occupying a lot of space. This lets you drill down to the area of your app that is taking more space and reduce the APK size drastically. I also want to call out that this can be
run from the command line now. You can add it to your CI system so that every time a CI build runs, it produces this data as an output. Write a little bit of shell script and you can do comparisons against your previous build, get more data about what has increased, and see where this build needs to focus a little more with respect to app size. The last section that I want to call out is Android
Vitals, which is actually part of the Play Console. Here, what we've done is visualise parameters from the real world. Your app is out there, being installed and used by a lot of users; how is it behaving on their devices? That data is collected and given to you in the Play Console. You can go into the Android Vitals page and see things like the ANR rate – application not responding – and the crash rate in the real world: how many crashes are happening? Slow rendering, which is something that is not easy to find out otherwise: on how many devices is slow rendering happening? Frozen frames? This is really good information for you to start saying, "I have this one problem in the real world; let me go back and be reactive about it. Let me profile again, find the areas that are a problem, and fix them. This must be affecting my users a lot."
There are more things, like stuck wake locks. You might not realise it, but there may be coding errors, or paths missed in your testing, that leave a wake lock stuck, which keeps the CPU or the screen from shutting down and, in the end, increases the battery usage of your app. Or excessive wake-ups: in the real world, your app might be waking the device too many times – too many alarms, for instance. All of these things could be waking up your app too often. We also have a section on developer.android.com about how to respond to these and work with them. That's it from me for today. I hope some of these
things are useful for you, and you have learned something new
here. Go back and profile your apps, and, if you make
substantial improvements, let us know your story so we can come
in and help you with more stuff. Thank you. [Applause]. BEN: Hello, good morning. Thanks
for being here on a Saturday to hear more about PWA, AMP and other exciting
things. I’m Ben Morss, a developer advocate at Google,
helping to make the web easy for programmers, beautiful for
users, and such. Before I was at Google, I used to make websites
for the New York Times and AOL, and, before that, I was a musician for some years. If I could quit coding tomorrow and sing like they did last night, I would quit today! Now, the Shadow Reader we'll look at is an example of the PWA and AMP pattern – of combining these things together in a single application. The first question is why? Why do
this? Why learn new stuff? Well, I’m thinking about what I
call sometimes the Web App Dilemma. 19 years ago back in
the previous millennium, for front-end developers, life was
pretty simple: any kind of complexity, almost all the code,
was on the back end – Perl, PHP, something else. The front end was HTML, CSS, and maybe a little bit of JavaScript for cute little tasks like button rollovers: when the mouse went over a button, it would change colour. That has all changed since then; it's now an immense, complicated universe of frameworks and technologies and ideas – and these are wonderful, don't get me wrong – but it ends up being hard for you and for the user. It can be hard for you because you're learning new stuff all the time: Angular, React, all the latest things. And, if you want a fast application, it takes a lot of time tweaking performance again and again so that your framework doesn't take too long to load, and then to parse and execute on mobile devices. That was the problem. If you're on your nice computer in your office with a fast connection, it might be fine, but on mobile devices, especially in countries where 3G is the norm, it takes a long time to load things up and parse things. Making the user deal with all
that is unfair to the user. You pay a price for that. You can
see over here that, over the years, the size of the average
web page has increased quite a bit. This is increasing more and
more. What happens is users are waiting for something to happen
on their phone. They don't know about all your wonderful frameworks; they're just saying, "Why isn't this thing loading?", and they're liable to bounce off your site. So again, the web is hard to write
for in some ways. How about making a native app? Apps are
easy and fun, but the problem is the average user will probably
never download your app. It turns out that at least in the
US, and it is probably similar here, 80 per cent of users’ time
is spent in the top three apps, and, out of the top ten apps in
the US last year, eight of those ten were owned by
either Google or Facebook. Up against large, well-known apps like Gmail and Snapchat, your app isn't as likely to get downloaded. For existing users, an app is good, but for new users, it is hard to say, "download my app". Can you guess the average number of apps a user downloaded per month in the US last year? It was zero – actually a little less than a half. The point is that apps are very, very useful, but the app you make might never get used by anybody. What do we do? The old days weren't so good: websites
were very simple, and users now demand more, especially on their mobile phones. They want the immersive, really nice, clear experience on their phones that no website could provide 19 years ago. So what do we do? We want a website that loads quickly, that is responsive, that is beautiful, intuitive, and so on. Is there some easier way? Is there an answer to this question? No. Okay, thanks. Good night! Wait,
hold on! I'm just kidding. We can try Progressive Web Apps and Accelerated Mobile Pages: PWA + AMP. If you've been here for the last day or so, you've probably heard talks about these things. Who knows what a Progressive Web App is? How about Accelerated Mobile Pages? All right. So let's discuss a little further what these things are. Progressive Web Apps, as you may have heard, are an idea about making a mobile site more of an app-like experience – fast, integrated, reliable, engaging. The idea is to make a mobile site as good as an app.
This is possible these days with modern web technologies and
modern browsers. You want to have things load fast initially,
to load fast throughout the app, to be an-like – immersive
to be in the whole screen of the phone, if they want it, to have offline content, push
notifications. Awful these things are possible with mobile
— all of these things are possible with the mobile
website. AMP, or mobile pages somebody a way to make sites
very fast, and you may have heard of it before because it
became used by publishers for simple static pages, but AMP has
evolved since its inception two years ago and it’s not just for
publishers any more, it can be used not just for simple web
pages or the way to get into Google’s AMP Carousel on
Google.com, but it is a nice new pattern to make sites faster
especially, and simpler, and easier for users and for you, the developer, alike. With AMP, your own JavaScript is no longer allowed, with a couple of exceptions here and there – expressions are still allowed – and all the frameworks you knew before can't be used any more, except on the server side. That may sound limiting, but it forces you to have a performant, fast site. In return, you get a standardised version of HTML, you get web components, and your CSS is capped at 50k. A little more about AMP later. This sounds hard to do, combining these things together, so we will discuss a simple example of it, called the Shadow Reader, which is so simple that it requires no –
I think I skipped some slides somehow. That’s weird. I lost
some slides. Where are they?
They’re later in the presentation. Excellent. Let’s
say that you’re working at a major newspaper of some sort,
and you've got a website. Well, it isn't that nice a website, but, like most newspapers, you've got AMP on your pages. And the boss says, "You know what? We need a better news reader app for our site. We want to take our existing AMP pages and mush those into some kind of nicer, better Progressive Web App experience." How is that possible? Can these things be combined? PWA and AMP are very different ideas, but they are compatible, and we will show you how to do it. Why do this, first
considered. First of all, think about AMP as web
components, as a source of rich data, and, third, as portable,
embedable units of content, thanks to a thing called Shadow
DOM. First, AMP as web components. It began as this way
to make simple web pages for publishers but has evolved into this powerful web components library. Here's an example of AMP. This will make an image carousel where images slide around the screen. Doing that yourself with a JavaScript library requires some work to make it performant and to lazy-load images that are off screen; AMP does all those things for you. You give the carousel a specified size, because all AMP elements have their sizes specified in advance, so, as things load in, things on the page don't jump around. You use amp-img tags inside, which provide performant versions of images, and then that's it. That's the Carousel. That's the whole thing. Another example of an AMP
component is a YouTube embed. You specify the width and the height and the ID of the video, and there it is. This component involves the complicated streaming technology YouTube uses – technology you can't usually access, but it is there as part of this component. There are a lot of components in AMP: analytics, ads and, if you were at my talk yesterday, amp-bind and amp-list, which now provide dynamic content on the page. I think a lot of pages on the web could be transcoded to AMP and wouldn't lose a whole lot, but would be faster. Certain things that are hard to do in plain web pages can be done with AMP easily. Next, AMP as rich data.
Consider this: AMP, instead of just web components and HTML, can be used to store data – say, about articles or posts, things that involve text, images and other content. It is like a version of Markdown on steroids, because it also brings the power of AMP components into the data. For example, say we had JSON over here about a random band, with its content over there. The same thing in AMP may contain things like an amp-img of a guitar, an h2, some kind of marked-up information, even an ad. Again, AMP as a source of rich data. Number three: AMP as
portable, embeddable units of content. Your AMP pages are already units of content, with this rich data and mark-up, and you can plug them into other pages. Again: you can take an entire piece of content – rich data with mark-up embedded into it, even things like ads – and plug it into other pages. Which means you can take existing AMP pages, as most publishers have, and repurpose them in other web pages. Now you're thinking, "Wait a second. That's what I was looking for, for my web app, for my boss." We can take existing AMP pages and do exactly that – but how? We use the magic of Shadow DOM.
I love that term because it sounds mysterious, because it contains the word "shadow". A shadow DOM is embedded in, encapsulated by, and attached to an HTML element. It is like the encapsulation you use in functions – good computer science practice – but the same idea applied to the DOM. You can see over here: you can imagine these AMP documents attached to DOM elements in a PWA. This allows a valid AMP page to exist inside a different page.
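As a preview of what that looks like in code, here is a minimal sketch of the pattern the rest of this talk describes. The container id, feed URL and selector are made up; `XMLHttpRequest` and the Shadow AMP runtime's `AMP.attachShadowDoc` are the real browser APIs involved, but this is a simplified illustration, not the Shadow Reader's actual code:

```javascript
// Sketch of embedding an AMP document in a PWA via Shadow DOM.
// Assumptions: a <div id="container"> exists, and the shadow build of the
// AMP runtime is loaded with an async script tag. The URL is made up.

// Queue work for the Shadow AMP runtime: window.AMP starts life as a plain
// array; the runtime executes everything queued on it once it loads.
function ampReady(callback) {
  (window.AMP = window.AMP || []).push(callback);
}

// Fetch an AMP page as an already-parsed Document (something fetch() does
// not give you directly), strip the parts we don't want twice, and attach it.
function loadArticle(url) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'document';   // the response arrives as a parsed DOM
  xhr.onload = () => {
    const doc = xhr.response;
    // Remove the page's own header/menu so we don't show a second one.
    doc.querySelectorAll('header, nav').forEach(el => el.remove());
    ampReady(AMP => {
      const container = document.getElementById('container');
      // attachShadowDoc renders the AMP document inside a shadow root.
      AMP.attachShadowDoc(container, doc, url);
    });
  };
  xhr.send();
}

// Only meaningful in a browser; guarded so the sketch is harmless elsewhere.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
  loadArticle('https://example.com/article.amp.html');
}
```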
The thing is, as you may know, AMP has to pass validation – Google, for example, will not accept AMP pages that don't pass validation, and that includes an invalid AMP page stuck inside something else. The way to see this is that a whole subsection of a page can be made of AMP while the rest of it doesn't have to be. If you've used AMP before, you'll recognise that there's a runtime that loads up; here, it is the Shadow AMP runtime. How this works is: when the runtime loads, it creates a global object called AMP, and, at the bottom, we have an attachShadowDoc method on that AMP global object that attaches an entire AMP document to your chosen container. Note also that we use an
asynchronous pattern here, because we want things to be performant. We load the script async – so what happens if it is not loaded yet? How do you use it? There is a trick in the middle over here. We define an ampReady function that takes a callback and pushes that callback onto an array. That array is window.AMP, the same name as the global object. When the runtime actually loads, it sees the array already exists and executes all the callbacks queued in it. Thus you have asynchronous loading, allowing you to attach your document to the chosen container using Shadow AMP. We actually made this thing called the Shadow Reader that illustrates this pattern, and I want to show you how it works. Let me go over to the demo.
There it is. That was like magic. This is a thing that we
built, it uses existing articles from the Guardian in London,
and you can see over here, this is a nice-looking page. This is
a PWA. These things over here are all links to articles. If you look at the source code at the bottom – you probably can't see it – it's plain HTML; it is not AMP. But clicking on any article over here is very, very quick – you see how fast that was? And the Wi-Fi isn't too good today, either. It pulls up the next article, and the article is an AMP article. If we inspect over here, we can see, way down there – let's see if I can find it – all the AMP stuff in there: amp-pixel, amp-analytics – and there's an article with a shadow root. That's where the AMP is attached, down there. We can go back to the other version. Things animate nicely; these cards animate beautifully. That's not working! We checked for that. Too many errors. We will do that again. Pretty quick,
right? The news, here, historic in the US. Michael Flynn is
pleading guilty. Interesting things happening in
my home country. The Shadow Reader also works offline: the offline cache keeps things appearing when you have no connection. It is a full-fledged PWA. Let's go back to the slides, and I will show you more of how this was built. As I showed
before, the AMP exists inside a PWA. Here’s how we do it. First,
we pull in that AMP data from our RSS feed. Next, we take the
AMP HTML and inject it into the DOM. Finally, the progressive
web app makes it look and feel good for the user. The first step, pulling in the AMP content, is pretty straightforward. We use a plain XMLHttpRequest; we don't use fetch, because with an XHR the response can come back as an already-parsed DOM document, which is pretty convenient here. We then remove parts of the page that aren't needed: the AMP page for the Guardian has things like a header and a menu, and we don't want those showing up inside our page – a second header and a second menu would be strange. Next, we inject the AMP into the DOM: you create a shadow root, attach the document to your chosen container element, render, and there it is. So, first, we loaded the AMP from RSS; second, we injected it into the DOM; finally, the PWA does all the interesting stuff. Let's discuss the PWA
part of it now. First of all, you get a fast initial load
using an app shell, so it loads quickly when it first loads up, and then, as the user moves around the app, things remain fast. As the user uses the app, it's an immersive, app-like experience. We use a manifest to let the user add it to the home screen as if it were an app, with the display taken over entirely by the website, like an app; and then offline content is also possible, all through the magic of PWA. We will go over these things each in turn. First of
all, the app shell. The idea is to have the basic shell of the app load first – your logo, the title of your company, in this case the newspaper, the colours, the things that indicate to the user that your app is there – and, as more things load, the user can wait for them. Here is a simple text-based way of showing things are loading: a bunch of tildes in a pattern that roughly represents an article. You could have a graphic instead, but this is a simple app showing how fast you can do these things with minimal effort. I noticed a couple of
years ago, I was using Facebook, and I was looking at my news
feed, and I thought: Facebook is much faster all of a sudden – things are half loaded already. It turned out it wasn't; they were tricking me. They were using these grey boxes and lines as an image of the content that would soon replace them, and I thought it had already half-loaded. Users like this kind of thing. Here's an example of that, using just tildes to show the articles are on their way. Next, things load quickly anywhere you are in the
app. This is easy because we're using AMP content, and AMP is fast, written for speed. The Shadow DOM gives us an app-like, immersive experience: as the content changes, the HTML gets injected into the DOM automatically, and, with animations and things like that, it looks good. This was not very hard to do. By the way, all this code is
available online on GitHub. Check out AMP.cards. It is there
to look at as an example. Number four: fancy app features like adding to the home screen are very easy to do. The manifest JSON file lets you choose colours and icons, and the display property lets you specify what kind of display you want – "standalone" will take over the entire screen like an app. Also a simple thing to do. Then, finally, offline
content. This can be harder sometimes with a PWA, because you have to use a service worker and define your own caching, which can be a little challenging. In this case, we used Workbox, which has also been discussed at this event. Workbox is a fast way to have caching and other PWA things happen. Over here, we cache some common routes; as they are requested, they get cached in the browser.
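The demo itself uses Workbox for this, but the routing idea can be shown framework-free. The sketch below is hand-rolled for illustration – the URL patterns are invented, and the strategy names merely echo Workbox's – it simply decides which caching strategy a request would get:

```javascript
// Sketch of the route-based caching decision a service worker makes.
// This mirrors the idea behind Workbox's registerRoute, but it is
// hand-rolled for illustration; the patterns below are invented.
const routes = [
  // App shell files: serve from cache, refresh in the background.
  { pattern: /\.(?:js|css|html)$/, strategy: 'stale-while-revalidate' },
  // Article images: cache-first, since they rarely change once published.
  { pattern: /\.(?:png|jpg|webp)$/, strategy: 'cache-first' },
  // The RSS feed of articles: always try the network for fresh news.
  { pattern: /\/rss\//, strategy: 'network-first' },
];

function strategyFor(url) {
  const route = routes.find(r => r.pattern.test(url));
  return route ? route.strategy : 'network-only'; // default: don't cache
}

console.log(strategyFor('/app.js')); // stale-while-revalidate
```

In a real service worker, each strategy name would map to fetch/cache logic run in the `fetch` event handler.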
Note again that these are examples of actual code. None of it uses jQuery – you could use jQuery, but we didn't need that library. So you've got your app. It works. The boss is happy. You did it. Congratulations, you kept your job! You made this beautiful PWA – but is it AMP? Is that what you had to do for Google, because
you can get into Google's AMP carousel in search? Is that okay? Can I do that? I mean, yes, you can. Does it actually matter? If the experience is good for the user, isn't that the most important thing? And, in this case, if what you're hoping for is to get served from the AMP cache on Google, that still happens, because your original news articles have all passed validation, so search results come from the AMP cache. The PWA is something else: a nice, fast way to read the news.
Here's an example that came out a week ago: La Repubblica did the same thing. It used the same cards the Shadow Reader used. There's their home page; click on an article – it is in Italian. Over there on the right – I guess, yes, my left – on that side, closer to where I'm pointing awkwardly, is the actual article. If you go to the actual site on your laptops and check it out, you will see in there the shadow root and the AMP stuff I showed you a minute ago. This
pattern can go beyond publishing. Let's say you're making a registration site with a form spread across multiple pages – say, five different pages. Often, these are hard for users to get through. What if you used AMP forms, which have automatic validation and are fast to load, and then put a PWA around those form pages? You could swipe between them, with smooth transitions. Or a different vertical – e-commerce, for example.
This pattern over here might be browsing a series of product pages: as you change from product page to product page, it makes smooth transitions and allows offline content. You can take those product pages that already exist and stick them into a nice, immersive PWA.
Another example is travel. Let's say you're looking at hotel rooms, or some room-sharing service, or something like that with multiple listings: those can be AMP pages, and you can slot them into the PWA. The upshot is, all of this makes you money.
You save money because it’s easier to write these things.
You can reuse existing pages. They’re simple apps. You can
make money because users like the experience which is fast and
engaging, and you have more conversions. So you’re making
money on both sides. It’s not all about money, though. For me,
as a former musician, and still occasional musician, it is
about simplicity and beauty. I think things that are simple and
beautiful are good things, because I mean, as a programmer,
you learn to make things that are clean, that are clear, that
aren’t cluttered with extra code that doesn’t do anything, stuff
that makes sense to people. There is a simplicity in that
beauty that makes code good, not only aesthetically, but also more
maintainable, because it is easier to know what is going on.
If you have clear, simple code, it looks better.
People will curse your name for putting in the complicated
stuff. So another reason to consider AMP is that simplicity
and beauty. If you have questions, you can find me after this
over at the Sandbox, at the office hours. And people have been
asking a lot where they can find out more about AMP, so I
have this slide this morning.
These three sites over here will get you started with AMP:
ampproject.org helps you understand the components and
how everything works. Ampstart.com has pre-designed
templates. Ampbyexample.com is a great way to try it out for
yourself. Thank you very much. And have a great day.
We'll be back shortly.
Nasir Khan: Essentially in that
case, the user is not interacting with the app so I
almost categorize this as a background use case.
Let’s step back and look at these areas.
That’s the process I’m going to talk about a little bit here.
So, there is a foreground state of a process, where the user is
interacting with an Activity and the system considers the process
to be in the foreground. This could also be the case when
a service has just been started, for a very brief time.
It could also be a visible state of the process where the app is
running, it’s visible to the user but it’s not actively doing
anything. It could also be a service
process, but your service is running in the background and
that's called a service process. A cached process is the
process state most commonly seen by the system:
on a device you may have dozens or hundreds of apps, and most
of them are in this cached state. When I talk about background
restrictions, the first two states are what the system treats
as foreground. What I really want to talk about is the cached
state, because most processes will be in the cached state at
any given time.
But let's move into the context of threads.
In Android, there are various ways of doing threading.
There are four main types of threads in Android.
Of course, Android developers are familiar with the UI thread,
or the MainThread. This is the thread where your
app's lifecycle callbacks come in, and this is where the UI
updates take place: the only thread that can update the
app's UI.
Then there are Binder threads. They are seen when you have a
bound service: some other app calls into that bound service, or
some other process calls into your ContentProvider.
Binder threads are only used when you are doing inter-process
communication. Then there are platform-managed
threads. They are regular threads, but
there are platform constructs that create those threads for you,
things like IntentService, and they are managed through those
APIs.
Finally, there are threads you create yourself: your own thread
pools. You can create them any way you want.
At any given time, your app runs its work on these worker
threads; as any Android developer knows, you want to do anything
even slightly long-running on them.
Then there are the concurrency constructs built on top.
There is the well-known AsyncTask, which is a way of moving
processing onto a worker thread. There are issues.
I won't go into much detail with them, but the way people use
it often leads to Activity leaks. There's also the Looper/Handler
mechanism. This is one of my favorite ways
of doing threading. It’s also central to Android.
I don't see it being used very often in apps.
If I were to give you a 30-second summary: a Looper is a
thread with a MessageQueue. A Handler is a piece of code
that interacts with that Looper.
With a Handler, you send messages to that MessageQueue or process
messages from it. You can create a worker thread
that runs its own Looper with its own MessageQueue.
An Android app is, at its core, nothing but a Looper:
a Looper processing messages on the UI MainThread.
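That 30-second summary can be sketched in plain Java. This is an illustrative analogue built on java.util.concurrent, not the real android.os.Looper and Handler classes; it only shows the shape of one thread draining one message queue:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative Looper analogue: one thread draining one message queue.
// Work posted from any thread runs, in order, on the looper's thread.
class MiniLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final Thread thread;
    private volatile boolean running = true;

    MiniLooper() {
        thread = new Thread(() -> {
            try {
                while (running) {
                    Runnable msg = queue.take(); // block until a message arrives
                    msg.run();                   // process it on the looper thread
                }
            } catch (InterruptedException ignored) {
                // quit() interrupts the blocking take(); fall through and exit
            }
        });
        thread.start();
    }

    // A Handler, in essence, is a way of putting messages on this queue.
    void post(Runnable msg) {
        queue.add(msg);
    }

    void quit() {
        running = false;
        thread.interrupt();
    }
}
```

Android's main thread has exactly this shape: a Looper draining a MessageQueue, with Handlers feeding messages into it.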
You could also use your own ExecutorService if you're
using plain Java. There are community libraries
that provide worker threads for networking.
If you are using RxJava, there are reactive ways to do it:
you can observe something on a worker thread and deliver the
result back on the MainThread. So, there are various ways of
doing multi-threading, and there's much more.
I'm just covering some of them. But what do these constructs
have to do with the states I talked about?
There are process state changes, which I talked about.
There are also component states: your Activity goes through
lifecycle transitions, and so do your fragments and your views.
You can create as many threads as you want, but the process
state and the component state are governed only by what the
user is doing and how the system views your app.
You can have as many threads as you like, and the app can
still go into the cached state.
So, each of these multi-threading approaches has
similar issues to deal with. Some of them provide
abstractions to help with those, but some, like AsyncTask, make
it easy to make mistakes. Activities, fragments, and views
are destroyed and re-created; if you're doing multi-threading,
you have to make sure you are aware of that.
If you're running a background task, you may want to stop it
when it is no longer needed, when your main activity goes away,
because you don't want to keep burning battery for no reason.
At the same time, if you've started some processing, you may
want to preserve that work somehow.
You don't want to completely throw it away.
You want to do all of that. The question is, how do we
actually deal with that? But before we do that, what
are we really trying to achieve here?
The most common thing Android developers deal with is
getting off the MainThread, the UI thread: doing something on
a worker thread and then posting the result back to the
MainThread. That's the pattern you'll see again and again,
and that's the reason you look into these multi-threading
constructs.
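That round-trip can be sketched in plain Java. Both sides here are ordinary executors; on Android, the "main" side would be a Handler bound to the main Looper. The class and method names are my own, for illustration:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Illustrative round-trip: run work off the "main" thread, then deliver
// the result back to it. Both sides are plain executors in this sketch.
class BackgroundRunner {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final ExecutorService main; // stands in for the UI thread

    BackgroundRunner(ExecutorService mainThread) {
        this.main = mainThread;
    }

    <T> void run(Callable<T> work, Consumer<T> onResult) {
        worker.submit(() -> {
            try {
                T result = work.call();                     // heavy work, off the main thread
                main.submit(() -> onResult.accept(result)); // post the result back
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }

    void shutdown() {
        worker.shutdown();
    }
}
```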
Let’s take a simple sample. It’s a really simple sample.
You have an app in which it is getting the latest value of a
stock from the network server. So, in this case, you would want
to start a background thread because you want to refresh
something from the network. You want to get the stock price.
You want to update only when the UI is visible;
you don't want to do it when the fragment is not there or
something like that. You want to handle configuration
changes: you don't want to crash, or
destroy something, or leak something.
You also want to preserve the work.
So assume you've downloaded the latest value of the stock and
you're doing calculations on it.
You have done that work on the thread, and you don't want to
throw it away even if the app goes away, at least for a few
seconds. At the same time, you don't want to run forever, and
you don't want to run unnecessarily. You want to be able
to stop, too. Even in this simple sample, you have all these
requirements, and there are various ways of meeting them, and
this is where people make mistakes.
I want to introduce a best practice using Android Architecture
Components. There was a training session about
the components yesterday. I'm going to give you a
high-level overview. If you missed the training,
I recommend you go back and listen to that.
I'm talking about two of the Architecture Components here:
ViewModel and LiveData.
LiveData is a lifecycle-aware component: it's tied to a
widget or a view element on the activity, and it knows when to
update it. It updates automatically.
That's the very, very high-level view. The scope of the ViewModel is different:
the ViewModel survives configuration changes, living until the user
finishes the activity or the system kills it. In the meantime, the ViewModel
preserves the state of the data alongside the lifecycle of
the activity, and LiveData knows when it is a good time to
update the UI. So with that high-level
background, let’s look at coding.
Yesterday, if you went to the talk, we talked about different
recommended patterns of app architecture.
One of them was a repository: a set of classes which deal with
getting the data, either from the network or from the database,
and making it available for you in one place. This is a naive
version of that. I'm calling it StockManager, and it creates
a worker thread for you. Essentially, what I'm doing in
handleMessage is this: there's a list of listeners attached to
it, and I'm notifying them every time there's a price change.
Instead of the price++ here, you could go to the network and get
the real value, because this is running on a background thread.
There's this requestUpdate, which a client calls to attach its
listener to this. There's this removeUpdate, where a client
removes its listener, and I'm using the Handler to send a
message to myself where I check whether there are any clients
left. If there are no more clients, I'm stopping the work.
It's a straightforward way of cleaning up after yourself.
This class has nothing to do with Android.
It's pretty standalone, so you can test it in isolation.
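A plain-Java sketch of the pattern just described follows; the names and the fake price feed are illustrative, not the talk's actual code:

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

// Sketch of the repository described in the talk: a plain class with no
// Android dependencies that runs periodic work on its own thread while it
// has listeners, and stops itself when the last listener is removed.
class StockManager {
    interface Listener {
        void onPrice(int price);
    }

    private final Set<Listener> listeners = new CopyOnWriteArraySet<>();
    private Thread worker;
    private int price = 100;

    synchronized void requestUpdates(Listener l) {
        listeners.add(l);
        if (worker == null) {            // first client: start the background work
            worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    price++;             // stand-in for fetching the real value
                    for (Listener li : listeners) {
                        li.onPrice(price);
                    }
                    try {
                        Thread.sleep(50);
                    } catch (InterruptedException e) {
                        return;          // stopped: exit the loop
                    }
                }
            });
            worker.start();
        }
    }

    synchronized void removeUpdates(Listener l) {
        listeners.remove(l);
        if (listeners.isEmpty() && worker != null) {
            worker.interrupt();          // no clients left: stop the work
            worker = null;
        }
    }
}
```

Because the class has no Android dependencies, a unit test can drive it directly: attach a listener, wait for a callback, detach, and the worker stops itself.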
So, with this, I'm able to start a background thread and
periodically get stock price updates. A ViewModel, as I said, is a
class that is associated with the activity and all I’m doing
here, with ViewModel is using it as a holder for my LiveData.
The LiveData is actually just a piece of data that is holding on
to the stock price. The LiveData is where it gets a little
more interesting. First of all, here I'm creating
my listener; remember the one I defined in StockManager?
I'm creating it here. When the callback is called, I'm
doing a postValue; that's how you set a value on a LiveData
object from a background thread.
Then we have the onActive and onInactive callbacks.
onActive is called when there are active observers on this
LiveData: as soon as there is an active observer, I'm requesting
updates from my StockManager. When onInactive is called, I'm
removing my listener, and the StockManager class
stops the work there. Can you guys hear me fine?
There’s some background noise. No?
You can hear me, okay. I can’t hear myself from here.
If you look at the Activity class, all I'm
doing here is getting the ViewModel. This ViewModel is
associated with the Activity. And with this
observer, I'm observing the LiveData.
This is where I get the updates. That's all there is to it.
There is a repository, the LiveData, and my main Activity class.
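The onActive/onInactive contract can also be sketched in plain Java. This is a toy analogue, not the real androidx.lifecycle.LiveData, which additionally dispatches values on the main thread and tracks lifecycle owners:

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import java.util.function.Consumer;

// Toy analogue of LiveData's active/inactive contract: start the upstream
// work when the first observer arrives, stop it when the last one leaves.
class MiniLiveData<T> {
    private final Set<Consumer<T>> observers = new CopyOnWriteArraySet<>();
    private volatile T value;

    protected void onActive() { }    // first observer: request updates upstream
    protected void onInactive() { }  // last observer gone: stop updates upstream

    public void observe(Consumer<T> observer) {
        boolean wasEmpty = observers.isEmpty();
        observers.add(observer);
        if (wasEmpty) {
            onActive();
        }
        if (value != null) {
            observer.accept(value);  // replay the latest value to new observers
        }
    }

    public void removeObserver(Consumer<T> observer) {
        observers.remove(observer);
        if (observers.isEmpty()) {
            onInactive();
        }
    }

    // postValue on the real class hops to the main thread; here we deliver inline.
    public void postValue(T v) {
        value = v;
        for (Consumer<T> o : observers) {
            o.accept(v);
        }
    }
}
```

A subclass would override onActive to call requestUpdates on the repository, and onInactive to call removeUpdates, which is the wiring described above.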
All right. So, as you can see, with a few lines
of code, I'm able to do all this. I'll share the slide deck
after the talk. You can use any threading mechanism
you want, but be aware of the UI lifecycle and the process state.
What about IntentService?
It is a very nice way of doing background work.
However, IntentService is just a started
service, and started services don't run freely anymore:
you cannot start a service if you're not in the foreground.
Unless you're handling something like a high-priority message or
an incoming SMS, where you get a temporary exemption,
in most cases you will not be able to start a
background service from the background.
If you remember, I showed you the states here.
Here is a more comprehensive view of it.
At a very high level, you can see that there are only two
states where the app is considered to be in the foreground:
there is an Activity started, or there is a service running with
startForeground. Other than these two, the app is
mostly in a background state. Unless the user is seeing
something, you are in the background and you cannot start a service.
If you're wondering, well, how do I know what state my app is in,
you can always run a dumpsys command.
Here I was running an app with a foreground activity.
In the first line, my state was foreground and the process state
was Activity; in the second, the activity was in the
background and the service was still running; and in the last
one, it was stopped. What are we really trying to do
with background services in general?
A good example: I was running my app, doing something in the
background, and the app goes into the background, but I still
want to finish what I was doing. I don't want it to just stop
right there; I need a little bit more time.
So, for that, the best practice is to use JobIntentService.
This is a new class in support library 26.1. If you use
JobIntentService on a pre-Oreo device, it runs as a normal
IntentService; it behaves exactly the same. If it is running
on Oreo or later devices, it uses JobScheduler underneath,
and it will start processing almost immediately, as if it were an
IntentService. So, the changes are simple: if you're familiar
with IntentService, you can convert it to a JobIntentService.
Instead of calling startService, use enqueueWork; instead of
onHandleIntent, override onHandleWork.
The worker thread is automatically created for you.
It is more robust and backward compatible, as well.
So, for any background work that you want to ensure finishes
after your app is no longer visible, use JobIntentService.
So, what if you really want to use a background service for
something different? Like, for example, syncing data.
Your app is no longer there, but you still want to sync
data periodically, or after some time. That's a valid use
case. But Doze was added in Marshmallow,
and its restrictions have been progressively increasing: when
the screen is off or the device is not plugged in, you will not
get network access. I'm sure you're aware of it.
I'm not going to go into much detail. But be aware of that.
Even though you want to run something in the background,
chances are you won’t be able to do what you want because of
these restrictions. Here, I want you to use
JobScheduler. I'm sure you're already familiar
with JobScheduler, but for any background work where you were
using startService before, I would strongly recommend using
JobScheduler now. You will get proper network
access and everything else you wanted to do, in a nice and
battery-efficient way.
You can also specify conditions for when you want your jobs to
run, and what the retry policy should be.
With Oreo, we added new constraints where you can say:
run only when the battery is not low, or when there's enough
storage available. You can have additional
constraints to run your job. So for most longer-running
background work, use JobScheduler.
A foreground service can also be used for a longer-running,
user-noticeable task which does not need user interaction.
An example of this would be a media app:
you want to play music, or navigation, where you want it to
keep running even when the screen is off. Use a
foreground service for that. You can promote a background
service to the foreground by calling a new method defined in
Context, startForegroundService. It lets you create a foreground
service while you are in the background, and you can do that on
Oreo and above. The most important take-away
from this is do not surprise the user.
You should not be using a foreground service when the user
is not expecting it, because you'll need to post a notification,
and the user needs to feel: yes, I need this.
Let it run.
I’ll just quickly skip over a few things here.
The SyncAdapter is another Android construct.
I'm only talking about it at a very high level, because this is
sort of a legacy mechanism: it's cumbersome to set up, and
it's only useful if you're using a ContentProvider.
A ContentProvider, again, should really only be used when you
want to expose your content to another app on the device.
If you don't have a use case for exposing your content, you
should not be using ContentProviders.
Instead of that, you should use a combination of JobScheduler,
Room or a similar database, and your own sync logic.
That is what I would recommend. Another important construct for
background is alarms. Use them sparingly.
You can use them to send a PendingIntent to your app, at
inexact or exact times.
Remember that your app may actually be in the background.
So, if you were starting a service before, you will not be
able to do that anymore. You can use a BroadcastReceiver
and post a notification to the user.
If the user interacts with the notification, you can definitely
start a service, or you can do something else: the
notification can take them to an Activity.
Or you can start a foreground service if that makes sense for
your use case. You can schedule a job with
JobScheduler from the alarm. And you may even use the
setAlarmClock functionality. Like I said, for all of these
cases, you have to be really careful: use AlarmManager only
when you need to do something at a specific time.
For all other cases, I would say, use JobScheduler again.
It is a very useful construct, and you'll be able to
do what you were doing with services or alarms.
Really high-level, I’m not going to go into too much detail of
this chart. It shows what your device is
doing in standby. You can see here, there is no
network: in any of these states, you do
not get any network access, even though your alarms may fire.
Your app may get its callback, but you don't have a
network, so there is not much useful you can do when you're actually in Doze or Doze Light mode.
I'll wrap this talk up by talking about Firebase Cloud
Messaging. Messages come in two priorities,
normal and high. A normal-priority message is
dispatched opportunistically. A high-priority message can
interrupt Doze. However, you should only use
high priority when it is absolutely critical to notify the user,
like a new email has arrived or a new text message has arrived.
Let’s talk about this real quick.
Do not overuse high-priority FCM; I already said that once.
For any data updates, and this is very important:
for any update, even if it's a sync message, try to
use the payload of the message itself.
It is battery efficient. Try to send the data to your
app in the payload. If it is absolutely necessary to
fetch any additional data, either use a JobIntentService
or schedule a job. Always notify your user when
you're using a high-priority message.
Today, it is possible not to do that, but
Android is moving more and more toward requiring it:
we don't want to surprise the user, and we want to preserve the battery.
So be prepared to handle it in a way that shows you're actively handling
it. That was a very quick talk.
I covered a lot of ground in various areas.
With all of these best practices, I would urge you to
go back, look at your app, and see where you can fit them in.
If you still have any questions
or concerns or if your use case doesn’t fit, feel free to talk
to me after this session. I’ll be here and I’ll also be in
office hours. Thank you. Pete LePage: All right.
Well, good afternoon, everybody. Thanks for joining me.
My name's Pete. I'm a developer advocate at
Google. I've been working on the web for as
long as I can remember. I love the web because I think
what's most awesome about it is that it makes the world
accessible. As a developer, I don't need
anything crazy. For my users, it’s the same
thing. They don’t need to go buy some
crazy brand-new high-end computer or high-end phone, you
can do it with basically whatever device you have.
The web really makes it easy for us all to connect.
So, it’s one of those things that’s a pretty awesome
experience. Today, what I want to talk to
you about is Progressive Web Apps.
How many people have heard about Progressive Web Apps so far?
That's the answer I want. Well, today I want to sort
of blow away a myth. And that myth is the idea that a
Progressive Web App means starting from scratch.
Right. You don’t have to start from
scratch to build a good Progressive Web App.
All of us have spent a huge amount of time and effort to
build a really great experience with what we currently have and
to take that experience and throw it away to start all over
again, well, it’s a lot of work. And our boss is probably going
to say, no. Not going to happen.
So, I want to walk us through an idea of how
we can take an existing app and turn it into a Progressive Web
App. To do that, we
need to have some ideas and sort of some tactical things that we
can approach in order to take those steps to get there.
So, in that spirit, let’s set sail on a little journey and
turn our single page app into a Progressive Web App.
Now, this app that I’ve got, it is a single page app, like I
said. It uses the iFixIt API in order
to access repair guides and this initial implementation uses
client-side rendering and it uses React as its rendering
engine. Now, this uses React, but
everything I’m going to talk about today works for whatever
browser — or, not browser, whatever framework you want to
use. You can do this with Angular or
Polymer, whatever you're building your platform on.
I’m not much of a React expert. I’ve played with it, like, once.
So, that gives you an idea of where I’m coming from.
Now, if you want to see the code that was used for this, as well
as all the diffs, you can go to the URL up there and that'll
give you all the code. You can grab the code, see how
it works and what’s going on. Now, I’m going to let you in on
a little secret: I'm not going
The demos are prerecorded. Sometimes the network goes out,
and sometimes the demo gods get really mad at you,
so rather than fight the demo gods, I recorded the demos so
that we can see them. And also, I have a hard time
walking and talking and typing, so you don’t want to watch me do
that. This is Chrome: type the URL in,
and sure, this looks exactly like I expect
it to look. I've got a list of different
products I can repair. I've got a bar up top.
Looks awesome. As developers, we need to test
in all the browsers. Let's do it in Firefox: same
thing. Now, Safari. I'm going to do
something a little bit different here:
I'll show you what the experience looks like for a user who has disabled
JavaScript. Uh-oh! There's nothing there.
Now, there's nothing there not because it's a problem with
Safari, but because this page is rendered client-side.
So, yeah. This is not very good.
This is one of the things that we’re going to want to fix
during our little journey today. Now that we've got a feel for what
this app does, we're going to use a Chrome DevTools feature
called the audits panel, which runs Lighthouse. It will run
tests on our Progressive Web App, or any app, and give us
suggestions on how we can fix things. Now, if you've played with
Lighthouse before or if you were just in the sandbox area and
you saw Lighthouse, this is the same thing.
All right. So, it's built into Chrome now, so you can run it
any time you want. It's really easy to run, but
it's also available as a node module,
so you can run it as part of your build process.
Maybe if you’re using gulp or grunt or maybe you’ve got some
Travis integration so that when you check something into GitHub,
you can say, hey, how’s this going to work?
Did this pull request break anything?
A couple of projects we have at Google, we do that with our
GitHub repos so we can make sure we didn’t break anything.
Okay. So, we'll run Lighthouse to get
a baseline and see what's going on: what's working well,
what maybe doesn't work well, and what we can
improve. Pop open the dev tools.
This is going to reload the page a couple of times.
It's going to reload and look at the traces, then turn the network
off and try to reload the page again.
It’s going to see how long it took for things to become
interactive. It’s going to do all sorts of
different tests. And you’ll see things flash up
and down. Usually this takes about 60
seconds to run. The other advantage of prerecording the demo
is that we don't have to wait the 60 seconds;
we can just skip ahead. Now, Lighthouse is a really
great tool. And, it’s really useful in
looking at numbers and seeing, hey, what score did I get?
And look at what audits passed and failed and it’s going to
give you a score, out of 100, for each of the different
categories. But I’m going to tell you right
now, don’t focus on that score. That score is important.
That score is good and getting 100 out of 100 is awesome!
If you get 100 out of 100, pat yourself on the back.
Let’s be realistic, how many people got 100 out of 100 in
every class in school? Right.
Nobody gets perfect scores all the time.
And as you learn, as things change, your scores will change.
So, don’t expect to get one score and have it perfect.
Things will change over time. All right.
And the other thing is Lighthouse is a great tool to
go, yeah, this works well. But this isn’t the only tool you
should be using. You should be looking at your
analytics to see what browsers your users are using and
checking things out in those browsers.
Excuse me. Checking it on different
operating systems, different devices, that kind of thing.
With that caveat out of the way, let’s have a look and see how
we did. All right.
So, I told you, don’t worry about numbers.
But, um, 36 kind of sucks. All right.
So, we’ve got one red. We’ve got three green.
So, obviously, there’s a lot I can do to improve my Progressive
Web App score. Performance is at 79.
That’s okay. If I got a 79 in school, that
was okay. Best practice is 85.
Generally, my goal when I'm working on a project is to aim for
a score above 90. All right.
Above 90 says, yeah, you’ve probably done most of the things
that you need to do. Might be missing a few things
here and there, but there’s room for improvement, but you’ve hit
the major pieces. So, let’s have a look at why.
And our Progressive Web App score, we’ve got a lot of failed
audits. You can see there’s seven failed
audits there. There's no Service Worker, it
doesn't work when it's offline, and we're not
adding a manifest or setting any of the theme colors.
There's a lot we haven't done, which means there's a ton of
room for us to go do stuff. We did have four tests pass, woo
hoo. But really only three of those
count; the HTTPS one we get for free because
it's being served from my own computer.
The page did load fast on 3G. We’ve got a good number of
things that we did right. Then, it also has some
suggestions on some manual tests that we should run, some things
that we want to check to say, hey, what else can we do that
Lighthouse itself can’t really test?
In terms of performance, I’m okay.
I could probably do better. But the big thing that I’m
seeing here is that time to first meaningful paint, so the
first time that I saw something useful on-screen was almost four
seconds. All right.
You can see the film strip view of what was going on up there,
so like at two seconds. Well, at about one second, I got
the blue background and at 3.3 seconds, I got the toolbar.
It was almost four seconds until I got something that was really
useful for me. Our speed index is almost 4,500.
That's way too high; we want to aim for around 1,250 or
so. Under opportunities, it gives us a
bunch of things we can do to improve.
Hey, running this on local host. Not a big deal.
Reducing our render-blocking styles.
I could go and spend lots of time hyperoptimizing this, but
I’m not going to do this. We’ve done well on
accessibility. There’s a few things we could
change, but again, we’re above 90 so I’m going to save that for
another day. We’re doing well on our best
practices, 85. I could maybe do a couple of
things to get that up. Again, there are places where I
can spend my time to get the scores up and do better, so I’m
going to sort of not do anything intentional to change this.
So, the first thing I'm going to do is add server-side
rendering, so the browser gets the full page as soon as it makes
one request. It doesn't have to make
multiple requests saying, oh, hey, I need this;
oh, hey, I need that.
This works for this app, and it
may work for your app. Or it may not, so take it
with a grain of salt: should I be testing this?
Is this really important for my experience?
To add server-side rendering to this
React application, we're going to share the rendering code
between the client and the server.
I'm going to use Express, a web server, to serve my
files, because it works really well with the React router.
It just kind of makes my life easy.
But speaking of React, other frameworks have similar ways of
handling server-side rendering; if you're using Angular, they
have their own way of doing it. You can check that out and
go down that path. This is not just a React thing.
So, the first thing I'm going to do, like I said, is show you
the code snippets, because watching me type gets boring;
I have a lot of typos, and you don't want to watch me typo.
I've done it a couple of times, and I always fail and end up
a sad panda. I'm going to add this serve task, which starts
the Express server; that means I'm pretty much serving this
stuff directly from my own server.
Then, in the server, I'm going to add React and render the
content into the page, so I have to update my app to
add all the stuff I need for that.
And then finally, I need to update my index file so that it
inserts the rendered HTML into the
right place. Now that we’ve got this set up,
let’s take a look, in Chrome, and safari to see how it looks.
Now, I don’t expect anything to change in Chrome.
We’re rendering the exact same thing.
It looks pretty much exactly the same.
One thing you may have noticed is that it did feel like it
rendered a little bit faster, but I'm not really sure.
That's the thing about not measuring with a tool: you can't
really tell how fast it is. Safari: go run that, type in the
URL. And it works.
Right, because we got that fully-populated page there, even
with JavaScript disabled.
So, let's see how adding server-side rendering did for our
Lighthouse scores. Same thing happens again:
go to the dev tools, go to the audits panel, run it again.
You don't want to listen to me talk and babble while it runs.
All right.
So, this is good. By adding server side rendering,
we saw a pretty good jump. Our Progressive Web App score
went up from 36 to 45, so we had a nine-point jump there, and our
performance went up by, I can't do math on stage,
12 points? It went up to 91, above
that sort of magic bar that I was talking about earlier.
Let's take a look at why. Since we only added server-side rendering,
I wouldn't expect much else to change.
So, all right. That was nine points that I got
pretty nicely. In terms of performance, we got
a huge bump, right? We got a 12-point bump, up to 91.
The biggest change
is looking at that first meaningful paint.
Remember before, we were almost four seconds?
Now, we're done in under a second. We're at 870 milliseconds.
That's a huge drop, because we don't have to make all these
multiple requests to get different things.
We're able to really speed up
that experience and our page speed index dropped as well.
It went from almost 4,500 to down to just about 2,000.
It's still above the number we're really trying to aim for,
but it's a lot closer: it dropped by more than half.
The opportunities to improve have now changed.
I could go through and fix a bunch of these, but we're above 90.
Happy. All right?
Let’s go ahead and add a Service Worker to make sure it works
instantly and reliably. One thing I want to remind
folks: Service Workers are great, but they don't work everywhere,
so we want to make sure that we really focus on the whole experience.
So, when I structured this app, I used
the App Shell plus dynamic content model.
The app shell doesn't change, so I'm going to cache it and load it
directly from the cache every time.
I take the network out of the picture.
I don’t have to worry about, oh, hey, what’s the network like?
For the data, I'm going to use a set of runtime caching
strategies. I'm going to fetch it and store it,
so if I need it later, I can get it from the cache.
If I need something else, I’ll go get it somewhere else.
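The app-shell-versus-runtime split described above might be sketched like this — the asset list and route names here are hypothetical placeholders, not the talk's actual code:

```javascript
// Sketch of the app shell vs. runtime caching split.
// These file names are made-up examples.
const SHELL_ASSETS = ['/index.html', '/styles/main.css', '/scripts/app.js'];

// Decide which caching strategy a request should use.
function strategyFor(path) {
  // The app shell never changes between deploys: serve it cache-first,
  // taking the network out of the picture entirely.
  if (SHELL_ASSETS.includes(path)) return 'cache-first';
  // Dynamic data: go to the network, but keep a cached copy so a later
  // request can fall back to it if the network is slow or down.
  return 'network-falling-back-to-cache';
}

// Inside a real Service Worker this decision would drive the fetch
// handler, e.g. self.addEventListener('fetch', e => { /* apply it */ });
```

In practice a library like sw-precache generates this kind of routing for you, which is exactly what the next step does.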
In this project, I'm using sw-precache.
It's a precursor to Workbox. It's a library with a set of
pre-defined strategies that help me write my
Service Worker code and do a lot of things without
having to write a lot of code myself, so
it's a great tool. Definitely worth checking out.
So, the first thing I need to do to get this to
work is import sw-precache and add a new task that's
going to generate the Service Worker for me.
You can see there, what I’ve gone and done and said, hey, I
want to statically cache these things
and then dynamically cache the rest of the content.
And then, in my HTML, I need to register the Service Worker so
it's there and works for all of the pages.
All right. So, now that we have the Service
Worker in place, let’s run Lighthouse and see what goes on.
We’ve seen this a few times so I’m going to skip the watching
of a demo. But, another bump.
By adding a Service Worker to precache our content, our
Progressive Web App has gone up by almost 20 points now, from 45
to 64. And we had a five-point bump on
our performance score. We went from 91 to 96.
That makes me a happy camper. We’ve improved the reliability
of our app by adding that Service Worker.
It’s now going to work and respond when the network is
slow, when the network’s not working.
Maybe if you were here yesterday and the network was kind of —
so, a good improved experience. It's also helped to improve our
speed index and dropped it down to about 1,200-1,300.
So, just above our ideal number. After you’ve added a Service
Worker, I always recommend you go into your network panel and
take a peek there, have a look at the sizes, and make sure
the Service Worker is handling all of the work you requested.
Those super-fast response times — everything is under 30 or so
milliseconds, so all of my pages are getting served up.
While we’re in dev tools, this is the application panel.
This is the central place you want to spend all your time for
debugging and understanding Service Workers.
I can see my Service Worker is running without errors.
One little tip I’ll mention, I usually keep this open.
If I change my Service Worker and refresh my page, before I
refresh the page, I look at that number.
You can see, number 46 activated.
When I refresh that page, I expect to see that number
increment. I want to see 47, 48, 49.
I can force it to update, use a bunch of these checkboxes
to improve some of my debugging experiences.
The offline checkbox lets me simulate an offline network
failure: hey, the network doesn't exist, so I
can see how the app behaves. Update on reload is neat.
If you hit reload on the page, instead of doing a normal
reload, it goes and reloads the Service Worker.
Then, once the Service Worker has been reloaded and the new
one’s been installed, it reloads the page so you’re always
running with the latest Service Worker.
Bypass for network is useful if you’re trying to figure out why
you’re getting the old content. It skips the Service Worker’s
fetch handler and goes straight to the network to get that
request. Finally, let's add the missing
application metadata. We need to control how the app
appears on the phone or desktop when it's added to the home
screen. The first thing I need to do is
add a manifest. This is the file that
tells the browser, hey, here are the names, the icons,
all of that kind of thing. I also want this to work on iOS
and Safari, too, so I need to add a set of meta tags
to the page, because I want to make sure this works
really well. So, let's take a look at the
manifest and what I’ve had to do there.
The first thing I did was add an icon, a single 512 by 512 icon.
I'm not a designer, and I don't care if my icon isn't pixel
perfect. Some of you in the room are
probably wanting to come up here and hit me for that.
If you want to provide more sizes, that's awesome —
you just need to specify them in your manifest.
You can see the manifest where I've specified the full name of
the app, a short name, the different icons, and the URL it
should start at. I've got that.
And finally, the last thing I need to do is add the link tags
for the icons so that Safari has something.
Chrome also uses these for your tabs and your top
bar. So, I want to add those icons.
And I also want to add the apple-mobile-web-app-capable meta
tag, set to yes. I love that line, it's so long, I can never
remember it. I add those, and the other thing I want to do is
add a link tag for my manifest and set the meta theme-color.
With those added, let's have a look at how it looks in Safari
on iOS. It gets added to my home screen.
There it is on the home screen, and it loads full screen.
It looks pretty much like I expect it to look.
So, this is pretty good. It looks mostly like a native
app. It won't work offline, because
Safari doesn't yet support Service Workers, but they have
said they're going to and it's coming.
So, sweet. Good.
Now that we have the manifest, let's take a look at
Lighthouse and see how it looks. We'll skip the video.
With those changes, we're at 91 for our Progressive Web App
score and 96 for performance. I'm above 90 everywhere.
I'm a happy camper. The only outstanding issue is
that I don't have a redirect from HTTP to HTTPS, because I'm
running on localhost. Our performance looks great.
Time to interactive is under three seconds.
First meaningful paint is under a second.
We're down to one thing we need to change in our best practices:
we're not using HTTP/2. Which is fine, we're running on
localhost. In DevTools, you can see the
manifest here. One pro tip, once you’ve added
the manifest, take a look and make sure all of the icons are
showing up properly. There are a couple of times I've
typoed the icon name and got one icon that doesn't work.
You can see in here, your Service Worker, that it’s up and
running and you can dive in and see what’s been cached by the
Service Worker. You can make sure everything’s
been cached properly. There is one little bug that I
want to tell you about, because it has had me hitting my head
against my desk. This is a bug in Chrome right
now. I think it gets fixed in 63,
which should come out early next week.
But that cache view there is not live.
So what you have to do is right-click on it, click
refresh, and then you should be able to see.
If you go in and you're like, why isn't it showing anything?
Just refresh it and you'll be able to see what's going on.
I think it gets fixed in 63, fingers crossed.
If it doesn’t, tweet it at the Chrome dev team and say, when’s
this getting fixed? The more people who yell at them
about it, the better. All right.
So, as developers — you know, I've shown you sort of how you
can do this. Sit down with DevTools, run the
audit on your existing app and find out how it does.
How does it score? And what can you do?
Where can you spend a little bit of time to increase those
scores? You can't necessarily do it all
overnight. But if you start working through
and say, hey, I want to get this fixed up, I
want to go add this, you'll get a better experience.
The goal is not necessarily — I always get in a little
trouble for this, but Progressive Web Apps are a great
marketing buzzword. What they're really about is building an
amazing user experience that's fast, integrated, reliable and
engaging. And that's what we want to make
for our users: experiences that they love, so they keep
coming back and buying stuff on our site or using our service or
doing whatever. We want to make sure that
they've got that great experience, and Lighthouse
is a great tool to help get you there.
I’ve got a few links here with some more details.
You can go grab the code for this, for both the iFixIt sample
and for Lighthouse. Lighthouse is a great
open-source project. If you have ideas for other
audits, go to their GitHub repo, see the
issues and suggest things there or add
comments. With that, I will say, thank you
very much. I’m going to go over to the
Lighthouse booth and hang out there.
At the Lighthouse booth, you can run your test through
Lighthouse and see how it goes. Hopefully the network’s a little
bit better. With that, thanks, everybody.
>>In this session, we'll talk in more detail
about TensorFlow Lite, designed for mobile and embedded devices. This is the
agenda: why ML matters for mobile
applications and mobile developers. And this is the
slide I have used already in the keynote, I think, to tell the
difference between AI, machine learning and neural networks.
But the thing is that neural networks are where the
breakthrough in intelligence is happening, so we've been
seeing breakthroughs in the area of neural networks, and
we are spending a lot of resources and time on developing new
neural network technologies. And the idea is the same: you
can see the network as a function, just like a function
you write in Java code. It takes any kind of data and it
gives you an output. You can replace these cat-and-dog images
with your own images acquired from the mobile phone. For
example, if you have a mobile phone, you have acceleration
sensors; you can use them as input and use a
neural network to work out what kind of movement you are
sensing with those sensors on the mobile phone. That could be
one use case of neural networks on mobile. We've been using
neural networks, especially deep-learning technology, for over
100 production projects — not just the traditional classic
learning algorithms; there are deep neural network models
already in production in many places at Google.
Especially for mobile use cases — such as images, OCR,
speech-to-text, text-to-speech or translation —
on-device machine learning is important, because it reduces
traffic and gets a faster response for your applications.
You can think of a machine learning model as one kind of
complex encoder for your raw data. For example, if you're
taking images or photos with your camera, the easiest way could be just sending the
images to the servers, where you can apply neural networks for
the analysis. But if you have a machine learning model
inside your smartphone, you can understand the meaning of
the images — what kind of object you have in the camera
images — on the device. For example, if you have a cat in the
camera images, instead of sending the whole image
to the server, you can send the text for cat — C-A-T — which is
100 times or a thousand times smaller than the original image
data. And you can apply the same technique to any other kind of
data. For example, if you have motion sensor data from the
mobile phone, rather than sending the motion sensor data
directly to the server, you can have a small machine learning
model running on the mobile phone extract a
feature vector that represents the patterns or
characteristics of the motions, so you can compress the
data much, much smaller. The end result is
that you get less traffic and a faster response in mobile
applications. And to do that, to implement the
machine-learning or AI-powered applications running on the
mobile phone, the easiest and fastest way may be to
use TensorFlow. As I mentioned in the keynote session
yesterday, TensorFlow is Google's standard framework for
building our new machine learning products. It is the
standard inside Google, created by the Google Brain team, and we open-sourced it at
tensorflow.org in 2015. TensorFlow is scaleable and
portable, so you can get started with it on your laptop,
Windows or Mac, and try out sample code.
Then you can move to production-level use cases by
using a GPU, or 100 GPUs. TensorFlow is scaleable, so
you don't have to change major parts of your code to run
distributed training on large amounts of data, such as
terabytes of data. Another benefit you could
get with TensorFlow is the portability, so after training
the TensorFlow model, you can bring the model — which
may be tens of megabytes of data — into a mobile
device, Android or iOS. Even if the mobile phone doesn't have an
internet or cloud connection, it can run the TensorFlow model
inside it to make a smart decision or prediction on the data. TensorFlow is the most
popular deep-learning framework in the world right now. We have
many serious companies like Airbus and Movidius —
those companies are actually using TensorFlow.
But if you want to bring the technology into your mobile
applications there are some challenges you have to face.
Because a neural network is big compared with other ML
models. With deep learning, you have multiple layers —
like tens of layers — between the input data and the output
result, so the total number of parameters and the amount of
calculation you have to do can be big. For example, the
Inception v3 model requires
91 megabytes of parameters, and also, if you use
TensorFlow without any changes, by default it consumes 12
megabytes as a binary code. If you want to bring your mobile
application into production, you don’t want to have users
downloading 100 megabytes of the binaries when they are starting
to use your application. You may want to compress everything
into 20 megabytes, ten megabytes, or a few megabytes.
So we have to think about optimisation for mobile
applications — things like freezing the graph and quantisation.
Freezing the graph means that you can remove the variables from
the TensorFlow graph and convert them into consts. Usually you have
the weights and biases in the neural network as variables,
because you want to train the model
on the training data. But once you've finished training,
you can put everything into consts. By converting from
variables to consts, you get faster loading times.
Quantisation is another technique for mobile applications.
Quantisation means that you can compress the
parameters into fewer bits. For example, we usually use
32-bit floating point numbers to represent the weights and
biases, but by using quantisation, you can compress
that into an 8-bit integer, making
the parameters much smaller. Especially for mobile
systems, it is important to use integer numbers rather than
floating point numbers to do calculations such as
multiplications and additions between the vectors, because
hardware for floating point calculations is larger and more costly. So TensorFlow already
provides ops for quantising and
dequantising. With that, I would like to pass the stage to Anita,
who will be talking about TensorFlow Lite, especially
designed for mobile applications.
>>Hello, everybody. We now understand that machine learning
adds great power to your mobile application, and with great
power comes great responsibility. So, armed with
this power, let's talk in detail about TensorFlow Lite and see
what it takes to build an app using it. You
probably heard this many times, and now you already know, that
TensorFlow Lite is a lightweight machine
learning library for mobile and embedded devices.
TensorFlow Lite works well on small devices. We built it
because it is easier, faster and smaller to work with on mobile
devices. Some of you who attended the labs earlier might
be wondering: what is the difference between TensorFlow
Mobile and TensorFlow Lite? You should think of TensorFlow
Lite as an evolution of TensorFlow Mobile — the next
generation, catered to be small in size and suited for
smaller devices. You will hear more about it in detail. So how
did we go about developing TensorFlow Lite? We spoke to
the vision team to find out what their needs are. We spoke
to our partners in Android about how we can leverage custom
hardware acceleration, and most importantly, we listened to you
guys and took feedback from our developer community to see what
should be prioritised and how we could build something really
good for mobile. We've tried to incorporate a lot of that
feedback. We came up with three goals: we wanted a very small
memory and binary size, even without selective registration;
we wanted to make sure that the overhead latency is small — you
can't wait 30 seconds for an inference to happen; and we
wanted to support quantised models.
This is the high-level architecture. As you can see, it
is a simplified architecture, and it works for both Android and
iOS. We will be focusing mostly on
Android today. It is lightweight, and it performs
better. So, to better understand, let's
consider how we build a model using TensorFlow Lite. There are
two sides — the workstation side and the
mobile side. The first step is to decide what model you want to
use. If you want a pre-trained model, then you can
skip this step. One option is to use a pre-trained model; another
option would be to retrain the last layers, like you did in
the code lab earlier today; or you can write your own custom
model, train it, and generate a graph.
This is nothing specific to TensorFlow Lite — it's standard
TensorFlow, where you build a model and checkpoints. The next
step — this step is specific to TensorFlow Lite — is to convert
the generated model into a format that TensorFlow Lite
understands. A prerequisite to converting it is freezing the
graph. The graph has the variables, and the checkpoint has the
trained tensors; you combine the two and feed the result
to the converter. The converter is provided as part of
the TensorFlow Lite software, and you can use it to convert
your model into the format we need.
Once the conversion step is completed,
you will have with you what is called a .lite binary file.
You now have the means to move the model to the mobile side.
You move it into the interpreter, which executes the model using
a set of operators. It supports selective operator loading;
without the operators, it is only about 70 kilobytes, and with
all the operators, it is about 300 kilobytes, so you can see
how small the binary size is. This is a significant reduction
from TensorFlow, which is over one megabyte at this point.
You can also implement custom kernels using the APIs we've
provided, which we will talk about in a few minutes. If the
interpreter is running on a CPU, the model runs directly on the
CPU; otherwise, if there is hardware acceleration, it can run on
the accelerated hardware as well. The main components of
TensorFlow Lite are: the model file format, the interpreter for
interpreting the graph, a set of kernels, and lastly, an
interface to the hardware acceleration layer. The model,
as we said before — TensorFlow Lite has a special
modified format, and this is lightweight and has few
dependencies. Most graph calculations in TensorFlow are done
with 32-bit floats, and most models are trained to be robust to
noise. This allows us to explore lower-precision
numerics. Using lower precision can
result in an accuracy loss, so depending on the application you
want to develop, you can decide whether it is acceptable.
Quantisation is supported in
TensorFlow Lite, and we also have a FlatBuffer-based format.
FlatBuffers is an open-source Google project, comparable to
protocol buffers but faster to use and much more
memory-efficient. When we developed applications, we always
used to think about optimising CPU instructions, but now CPUs
are far ahead, and writing something more efficient for memory
is more important today. FlatBuffers are similar to
protobufs, but you can access the data without
unpacking it, and there is no need for a secondary
representation before you access the data. So it is aimed at
speed and efficiency, and it is strongly typed. The next
component of TensorFlow Lite is the interpreter. It is
engineered to work with low overhead on very small devices,
with very few dependencies. It keeps the binary size to about
70 kilobytes, and 300 with operators. It uses FlatBuffers,
so it loads fast, but the speed comes at the cost of
flexibility: TensorFlow Lite only supports a subset of
what TensorFlow has. If you are building a mobile application,
and the operators it needs are supported by TensorFlow Lite,
then the recommendation is to use TensorFlow Lite; if you're
building an application that needs operators TensorFlow Lite
doesn't have, then you should use TensorFlow Mobile. But going
forward, TensorFlow Lite will be the main standard. Ops and
kernels: it has support for the operators used in some common
inference models. The set of operators is smaller, like I said,
so not every model will be supported. We have core ops, which
work in both float and quantised forms. These have been
used by first-party Google apps, so they have been beta tested, and
we've hand-optimised many common patterns and fused many
operations to reduce memory bandwidth. If there
are ops that are unsupported, we provide a C API so you can
write your own operator.
Finally, there's the interface to target hardware, which comes
with hooks into the Neural Networks API. If you have an Android
release that supports the API, then TensorFlow Lite can use it;
if your Android device does not support the API, the model is
executed directly on the CPU. The Neural Networks API is
supported in Android 8.1 Oreo, which we announced a couple of
days back. It supports hardware acceleration you can get from
vendors, for GPUs and DSPs, and uses TensorFlow as a core
technology. For now, you can continue using TensorFlow to write
your mobile app, and it will get the benefits of hardware
acceleration through the Neural Networks API —
for example, if a device has a DSP, it can transfer work and map
it to the DSP. It uses neural network primitives that are
similar to TensorFlow Lite's. So, the architecture for the
Neural Networks API looks like this: there is an Android app on
top. Typically, there's no need for the Android app to access
the Neural Networks API directly; it will access it through the
machine-learning interface. The Neural Networks runtime can
talk to the hardware abstraction layer, which talks to the
device and runs on various accelerators. Like I said before, if
nothing is available, then we run it on
the CPU. So how do you use TensorFlow Lite?
This is the high-level diagram we showed earlier; let's talk
through the high-level code for what needs to be done to make
your model work with TensorFlow Lite. The first step is
generating a model. This is standard TensorFlow code: if you
have already written a model, you use the same thing and
generate the graph. The next step is to convert it into the
TF Lite format, using a convert function that converts it to
the FlatBuffer format. This can be done on the command line
after the model is generated, but it is convenient practice
to put it in your Python script, so you can find errors earlier
if the model can't be converted for some reason. Again, some
ops are not supported, and you want to know that ahead of
time — you don't want to run training for days and then find
out the model is not supported by TensorFlow Lite. So adding
this to your code is good practice. The output is a binary
stream, so you can write it to a file and get a binary file.
Once you have the model in your app, keep it uncompressed,
because you don't want to add extra latency decompressing the
file to access it. The main TensorFlow repository is where
TensorFlow Lite is available, and the dependency for your app
will be the TensorFlow Lite library, which has to be linked in.
We provide with TensorFlow Lite a demo application, as well as
a binary on Maven. You can download it and see how it is
integrated into an app and how you can use it.
So, now once we've done this,
we're now ready to use the model on the device. We have a Java
API, and you can look up the documentation on GitHub.
You initialise the interpreter with the model file and then run
the model in inference. If you have custom ops, we provide four
functions you implement for a custom op. You can refer to the
GitHub documentation if you want to see more details. So we
released TensorFlow Lite two weeks back. As part of TensorFlow
Lite, we built demo apps, and we support MobileNet, in float
and quantised versions. So, be sure to go and check it out, and
I have added
links here. Please feel free to refer to the TensorFlow Lite
documentation, and also download the code under contrib.
These are some screenshots from the sample demo app. You can
download and test it for yourself. You can see that it
identified the objects, with the top three choices at a pretty
high accuracy. Some
stats: the interpreter without operators is 70 kilobytes. This
is 15 times smaller than TensorFlow — that's why we say it
is suitable for mobile and embedded devices. It is four
times faster to load. Between float and quantised, our
quantised ops and models are four times smaller, and they're
also 50 per cent faster to load on your device. Future work: we
want to improve the tooling and make it more
user-friendly. Like I said, we support only a subset of
ops; the ops we support are listed on
GitHub. We want to extend these, and we want to target smaller
devices — not just Android, but Raspberry Pi and others.
Eventually, we want to do on-device training as well.
That's about it. Thank you very much; go and check out
the TensorFlow Lite code.
I would like to wrap up this session by showing an actual
use case of a TensorFlow application running on mobile.
I would like to introduce an interesting
application. It is a production app — a gymnastic
exercise scorer made by a Japanese vendor, available for both
Android and iPhone. What is the background?
In Japan, millions of people do the morning
gymnastic exercise by listening to music on the radio; it
is very common for millions of people in Japan to do the
morning exercises every day. This application
measures the movement of your exercise — the arms and
everything — to score how well you have been doing the exercise.
They've built their own TensorFlow compiler, comparable
to the TensorFlow Lite we are now developing. Because
this vendor has very talented TensorFlow expertise, they
were able to build their own TensorFlow compiler that
shrinks the model from tens of megabytes to two or three
megabytes, putting the whole model inside the Android
application. This compiler does many of the optimisation
techniques I've mentioned, like quantisation, on the binary
parts. Everything. So let's take a look at it in action. [Video]:
Can I switch to here? This is the
application, where you can choose from various exercises,
and I will be playing the most standard one. So this is the music. [Piano music]. This is embarrassing. [Piano music continues]. Just like that. Let's stop! Oh. That's enough!
Enough! [Laughter]. So now, the TensorFlow model is trying to
work out how well you've done with the exercises. You can see
the bar charts here. That's the evaluation by the TensorFlow
model inside this application. It is a real thing, okay? So,
let's go back to the slides.
>>You saw the score. The score was made by the deep-learning
model running inside the mobile phone application. So, the
technology is now available, and you can do the same things
with TensorFlow Lite, which
is available. So I hope, maybe coming this year and next year,
we will be seeing more and more Android and iOS applications
with real deep-learning models running inside them to solve
real-world problems. That's it. Thank you so much.
>>LYLA: Welcome to the Instant Apps session. This session will
cover building an Instant App, including core concepts and the
development process. Now, Instant Apps were announced at Google
I/O 2016, but they became generally available at Google I/O
2017, and now any of you can build an instant app. First,
let’s talk a little bit about what Instant Apps are. Instant
Apps are native Android apps that require no installation.
But instead of me waving my hands around and using a lot of
words, let's take a look at what that looks like. I search for
the show on Red Bull TV called The Crevasse, and I'm
immediately taken into an immersive native app experience.
It works from any URL that you own, whether from a Google
search, as in this example, or from a text message from a
friend. Now, Instant Apps work on
Lollipop and higher, so, if you have a phone running L or
higher, you can try it out: search for New York Times
crosswords and click on the card that has the Instant App.
Instant Apps are supported on over 900 million devices. In addition,
many of our early-access partners created Instant Apps
and they’ve already seen increases in session duration
and engagement from their Instant Apps. So, I'm going to go
ahead and take a dive into how you would set up your app to be
ready for Instant Apps. Now, the whole point of Instant Apps is
to decrease friction for users. We want it to be so that you
can click on a link and get into this native app experience.
We don't want to be showing users some long permissions-style
dialogue when they do this; Instant Apps show contextually
relevant permissions at run time. In addition, for Instant
Apps, you will be mapping URLs to different
activities of your app, and this is exactly what app links do.
We use app link functionality to enable this. Additionally, app
links allow you to get rid of the disambiguation dialogue. App
links require two things to get working: first, you need to make
intent filters in your manifest file that associate
those activities with the URLs; secondly, you need to host a
digital asset links JSON file to verify that you actually own
the domain. There is an App Links Assistant tool that will
get you started with both of these. It generates the XML
intent filters and the digital asset links JSON file. Both app
links and runtime permissions are for API 21 and above. To
achieve this, there's a runtime that manages your instant app.
The runtime backports those functionalities for L and higher.
If you actually want to see that runtime on your phone,
it's called Google Play services for Instant Apps, as shown
here. So, runtime permissions and app links are the two major
technologies that your apps will need to support before you can
start building Instant Apps. But in addition, there's a
checklist of smaller preparations that you need to make. For
example, instant apps can only use a subset of permissions, and
network requests need to be secure. There
are a couple of other restrictions that you should look over.
I'm not going to list those out now, but we have awesome
Instant Apps documentation, so you can go to
the site and get that full checklist for yourself. So we’ve
made our app ready to become an Instant App. Now I want to
talk about how Instant Apps are actually structured. One of
the central goals when making an Instant App is to make sure
that, when you click on that link, you're getting into
your app fast, and it is not dragging on a really long time.
Now, you’re probably used to having a single module for your
app – probably called “app” which
contains all of your code and resources. Within your code, you
probably have separate features.
So, for this example, I will be taking you through a fictional
travel app called Banjara. It lists the
nearby attractions. You can view specific details
about an attraction, and get a map to where it is. You can write
reviews if you happen to be a fan of that attraction. So each
of those three activity flows can be thought
of as a separate feature. Features are discrete
experiences where the user accomplishes some sort of
meaningful task. For Instant Apps, a feature must be
associated with an activity or activities. Banjara has
essentially three features: the first one is browsing a list of
nearby attractions; the second is posting a review about the
attraction; and the third is getting detailed
information about an attraction. So a user could use
any one of these features independently, and they would
still be able to do something interesting and useful for them.
Okay, so, features probably don’t have completely separate
code. They will most likely share code. For example, you
probably want your activities to have the same colours and logos
rather than switching those things up, so they will probably share
common styles and graphics. Similarly, you will have some
shared utility classes such as a class to set up dependency
injection. Finally, there probably are some common
libraries like the support library or maybe architecture
components that will be useful in all of your features.
All of this shared code I will be calling
“base”, and it will become clear soon why I'm doing that.
So here it is all mapped out. We have our single module, but
remember that inside our module, we have these different
sections of code for the features, plus the base code. One of
the core principles behind Instant Apps is this: if your
users only need to use one feature, why are you wasting
their time and data downloading all of the features? Another
way to think about Instant Apps is the user clicks on a link and
downloads only the compiled code and resources for the
single feature they need at that moment. If your friend wants to
send you the details of an attraction, you can click on a link
and via Instant Apps, you would only download the Banjara
details feature, and you wouldn't have to download the
feature for listing attractions or the feature for writing a
review. You can do all of this without having to fork your
app’s code so you’re not going to create an extra little mini
app on the side. One of the other core principles is that
Instant Apps build from the same code base, which is used for both the
instant and the installable app. This makes maintenance easier because
you don't need to maintain a completely separate code base,
and it ensures that your Instant App feels the same as your
installable app, which is very important because,
essentially, it is the same app. So what you'll do instead of
forking your app is to take your single app module, and you’re
going to separate out your code into different modules, one for
each feature. You will, very cleverly, call these modules
“feature modules”. Then you're also going to take that base
code that I talked about before and you separate it out into
another module called the base-feature module. Now, you’re
going to make two additional modules on top of that, and
they’re both for building the app, so one of those modules is
going to be called your installable app module meant for
building the installable app and the other module will be an
instant-app module. We are going to take our one-app module
world and break it into six different modules. Now, to
properly configure all of this and build your Instant App, you
need to learn about plugin types. Here are our modules
again. Each of these modules has its own Gradle file and each
will use a different Gradle plugin. The installable
app module uses the application plugin, which
should be familiar to you: it is the standard plugin you're
using to build your APK today. The purpose of this module is to create
the APK for your installable app. Its Gradle file will
identify the app ID. This module can also hold code that only
makes sense when installed – for example, some
offline-mode feature that doesn't make sense in the
Instant App context – or its purpose can just be
building your installable app. Because it is building an
installable app, it needs dependencies on all of the other
feature modules. Okay, so, in comparison, the new instant-app
plugin creates an Instant App. It basically contains nothing:
no code or resources, and a blank
manifest. But it does depend on all of the features that you
want to include as part of the Instant App experience. The
feature modules use a different plugin called the feature plugin
which is new. The feature plugin is what makes Instant
Apps possible. It is cool. It will compile differently based
on what’s building it. So if you’re building a feature from
your installable app, it will compile as if it is an Android
library. If instead you compile it from an Instant App, it is going
to compile as a separate mini APK. Your module
will contain code specific to that feature. The base-feature
module is different from the other feature modules. It uses the same
feature plugin but there’s a flag inside of it which
identifies that it is a base feature module.
Importantly, you can only have exactly one base feature module
per Instant App, and, again, it is not really a feature; it is
the location where you have shared
resources, code and libraries. All of your other features have
to depend on it because it has the shared code resources and
libraries they want. And, importantly, it propagates the
app ID out to the other feature modules. In practice, what is
going on when we build the instant and installed app and we
put them up on the Play Store? Let's look at what happens when
we make an installable app module. You build an installable
app. This builds an APK, exactly the same as the APK you're used
to, and you upload that to Google Play; that is completely normal.
All the feature modules compile as libraries – the feature plugin decides to do that – and they are included
within the overall installable app. Now the user goes to the
Google Play store and they download your app and they get
that exact same APK on their device with all the features you
put in it. Okay, so let's look at what the instant-app module
does. Now, this looks similar but you might notice there are a
couple of extra file extensions stuck in there. What the
instantapp module is doing is creating these mini APKs for
each of your feature modules, and then it zips all those up
into a single zip file. This is what you upload to Google Play.
The behaviour on the user's device is this. When they click on
the Instant App link, two things are downloaded: the base
feature APK, which comes along with one of your feature APKs. If
Fred tells you to check out a nearby attraction and sends you
a link, you will only download the detail and the base
APK. Let's say you want to write a review about the
attraction, and you go to Google Search and you click on the
review functionality. So you already have the base feature
APK, so the only thing that is downloaded at that point is the
review feature APK. As you can see, your user is downloading
only exactly what they need, and that's because you've modularised your app. First, I
want to create a single-feature Instant App. Here's our end goal,
which is a little bit complicated.
And here's where we are starting: a single module
that has all of this different code inside of it. So, a much
easier intermediate goal is to start by just building those two
additional modules that build your app. Then you could have
all of your code in a single-feature module. Now, this
isn’t doing that cool thing where it selectively downloads
particular features yet, but it does allow you to get the two
build processes working and then resolve any issues like
implementing the correct permissions structure. Okay, so
I'm going to black-box all of this, and here's the same
diagram, hiding the internal code of the feature module.
Okay, so, again, all of these modules have Gradle files,
and the gradle files are how you set up the configuration and
get everything to build properly.
Let's take a peek into the installable app's Gradle file.
We use the application plugin. It will contain the application
ID, as I mentioned before. And it is going to depend on that one
feature module because it needs that to make the APK. So
nothing too surprising there. Let's go ahead and take a
look at what is going on in our instant-app module. This is
basically the same as the installable app, but we're using
the instant-app plugin and there's no app ID; we do, though, have that
same dependency on the feature because we want that feature to
be part of the instant app.
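The two builder modules' Gradle files might look roughly like this — module and package names are hypothetical, and this is a sketch of the pattern just described rather than a complete build script:

```groovy
// app/build.gradle – builds the installable APK
apply plugin: 'com.android.application'
android {
    defaultConfig {
        applicationId 'com.example.banjara'   // the app ID lives here
    }
}
dependencies {
    implementation project(':feature')        // the single feature module
}

// instantapp/build.gradle – builds the Instant App bundle of feature APKs
apply plugin: 'com.android.instantapp'
dependencies {
    implementation project(':feature')        // same dependency, no app ID
}
```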
Let’s look at what is going on in our apps build.gradle file.
This is the one feature we are building right now. So this is
going to use that new feature plugin, and it has an
application project dependency on the installable app. The reason for
this is that it gets the app ID from the installable app and
that’s pretty much why that’s there. You will notice that we
have this base feature configuration set to “true”.
Remember how I said that all instant apps need exactly one
base feature module? In this simple version of the app, we only have
one feature and therefore it follows that this must be the
base feature. Okay, so, with that, you have your first
instant app running, and, again, I’m going to call this a
single-feature instant app because it only has one feature.
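As a sketch, the single feature module's Gradle file just described might look like this (module names are hypothetical):

```groovy
// feature/build.gradle – the one and only feature, so it is the base
apply plugin: 'com.android.feature'

android {
    // Exactly one feature module per Instant App is flagged as the base.
    baseFeature true
}

dependencies {
    // Gives this feature the application ID from the installable app.
    application project(':app')
}
```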
So if I did this for Banjara, I can see it runs as an Instant
App, which is cool. There's another reason to take this
intermediate step, which is that it can help you flag libraries or
permissions that might not be compatible with Instant Apps
before you go through the process of modularising your
code. The real power comes from the modularisation process,
splitting up your code into separate feature modules. Here's
the end goal again, and here's where we are right now, having
built the additional Gradle modules. At this point, you want
to start modularising. Where do you start first? You should
identify what should be in the shared base feature module, and
you separate it out. The base module will contain the classes,
resources and libraries that the other features – the list,
review and detail – rely on. Then you can start extracting the other
features one at a time and testing how it is going as you
go. Now, note that you don’t need to have all of your
features be part of the Instant App, so you could decide that
you just want to extract the details feature and only
include that in the Instant App. The only difference would be
that you would require an install as soon as the user wanted
the review or the list feature. But if you do keep
extracting features, you will eventually get to this point. I quickly want to talk
about a tool in Android Studio that helps with this process:
it's called the Refactor Modularize tool. You
select the different classes, then you right-click them and go to Refactor,
then Modularize. This will move those classes and files over to a
different module, but it is also going to tell you the dependencies
those classes have – dependencies on
other classes and dependencies on different
resources – and it will give you the ability to selectively decide
what you might also want to move over with them. Okay, so now
let’s take a quick look at the gradle files
for this base app. Here's our base-feature Gradle file. It uses
the feature plugin and still has the base-feature flag set
to true. Okay, now, note that we have that application project
dependency on the installable app module and that’s to get the
app ID. We then have additional feature-project dependencies on
each of the different feature modules, and the point of that
is simply this: you have that app ID, and now the base feature is
responsible for propagating that app ID to each of those
different feature projects. This ensures that your entire app,
your Instant App and installable app have the same application
ID, which is pretty important when you upload to Play. Let's
take a look at what one of the more normal feature modules'
Gradle files looks like. This does not contain that
base-feature configuration because it is not a base
feature. Feature modules will always depend on the
base-feature module, but they won't depend on the
other feature modules. You can have other library dependencies in there as well.
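Sketched out, the multi-feature Gradle wiring just described might look like this — module names are hypothetical, and the `feature project(...)` lines are what propagate the app ID out to each feature:

```groovy
// base/build.gradle – the shared base feature
apply plugin: 'com.android.feature'
android {
    baseFeature true
}
dependencies {
    application project(':app')      // where the app ID comes from
    feature project(':list')         // propagate the app ID to each feature
    feature project(':detail')
    feature project(':reviews')
}

// detail/build.gradle – a normal feature module
apply plugin: 'com.android.feature'
dependencies {
    implementation project(':base')  // every feature depends on the base
}
```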
Okay, so at this point, you have both your installable and your
Instant App modules which depend on all of your feature modules. You have your feature modules
which all depend on your base module. And you have that one
single base module which has all the shared code and resources
that you need for your feature modules. Finally, that base
module is responsible for getting the app
ID from the installable app and propagating it out to the rest
of the app. That’s what the full instant app structure looks
like. Okay, so now it is my pleasure to introduce Anirudh
Dewani who works on the partnerships team and he will be
chatting about managing module size and some additional Instant
App capabilities. Thank you. ANIRUDH: Thanks. Let's talk about managing module sizes.
Now, in development mode, there are no constraints on the size.
It's great for first refactoring efforts and for making
sure your app is compatible with the Instant App sandbox.
Okay, understanding module size restrictions. Your modules need
to be lean. The max size for a download is 4MB –
we will discuss that. Nevertheless, you should aim for
less. You should aim for less to improve delivery speed and to
improve the user experience. Okay, so, for the
download, we said the total download bundle size should be
less than four megabytes. That means that the feature APK you
are trying to run – the one whose activity is being
addressed by the URL – plus the base feature module should be less than
four megabytes. The base has all the required common dependencies,
and it is always downloaded unless it is already cached.
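The download rule can be sketched as a quick calculation — what the user downloads is the base feature APK plus the feature APK addressed by the URL, and together they must stay within the limit (the sizes below are hypothetical):

```java
// Sketch of the 4 MB Instant App download rule described above:
// download = base feature APK + one addressed feature APK.
class DownloadBudget {
    static final long LIMIT_BYTES = 4L * 1024 * 1024;

    // true if the base plus one feature fits in a single instant download
    static boolean fits(long baseBytes, long featureBytes) {
        return baseBytes + featureBytes <= LIMIT_BYTES;
    }

    public static void main(String[] args) {
        long base = 3L * 1024 * 1024;    // 3 MB of shared base code
        long detail = 1L * 1024 * 1024;  // 1 MB detail feature
        System.out.println(fits(base, detail)); // prints "true": exactly at the limit
    }
}
```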
This is where refactoring into feature modules is useful. In
this example, we have a base module with three megabytes in
it. That leaves one megabyte for each feature module. With
four features in our app, we can have seven megabytes of total
app code. But there may be some cases where you want a little
more room in your feature module. For example, a large SDK
or a third-party library. A good example of this would be a
payment module. Suppose you have a big payments library. What
you want to do is pull the payments library out of the base and into
the payments feature module, to create more room to add functionality. In
this case, when we pull out a one-megabyte library, our base
is now two megabytes, and this gives us two megabytes in each
of our feature modules. The total bundle size still needs to be
less than four megabytes. With Instant
Apps, we can use a new publishing track called the
development track. This development track lets you
build and perform end-to-end testing of your Instant Apps
rapidly, to match the velocity of development. The module size restriction for
the development track is ten megabytes. In addition to this,
we also have the alpha, beta and production tracks, where the size
limit is four megabytes. So, here is a blog post that talks
about the best practices for managing your download size for
Instant App modules. Let’s discuss some tips and tricks for managing your size. Use
ProGuard to shrink your app and to remove any unnecessary
classes. You should start with the installable app first and then use ProGuard. Use the APK
analyser and start with the biggest chunk. Use the
unused-resources lint check to see resources that are not being
used in your app. Investigate and remove as needed. A couple
more tips here: use WebP to optimise image assets and you
can use downloadable fonts. The APK analyser
understands your build artefacts and is able to show details. In
the screenshot on the left, you can see our build
artefact, a zip file showing us the feature APKs inside, and you
can drill down into a feature APK to specific
resources and investigate the size. Okay,
with Instant Apps, we introduce configuration APKs which are
similar to multi-APKs but with one key difference: they don’t
have any code. They only carry configuration resources. We
currently support three dimensions: architecture, density, and
languages. By using configuration APKs, you can
bundle resources specific to one or a few device
configurations and only deliver the required APKs to the device.
This places your app's code and device-agnostic resources
into one APK, and device-dependent resources
into separate APKs. There is a Gradle configuration block for
this: you specify the dimensions that you want. On the
right, you see a screenshot, again from the APK analyser.
You see a bunch of APKs generated apart from the base
and the detail, so there are a lot of configuration APKs there, each
carrying the resources for its specific configuration. We only
count the size of the largest configuration APK plus the
base for each supported configuration. Now let's talk
about some extra feature APIs. We stressed that the Instant App
and the installable app should be the same app. However, there
may be some cases where your installable app offers some
functionality that's not possible with the Instant App. A
good example of this would be offline video. In those cases,
you may want to prompt the users to install the app. We've made it
easy for users to install your app when using your Instant App.
This can be done by placing a button. Let's check the code for
the install button. By default, our button is hidden in the layout. At the top, you will see a
dependency being added: that is the Instant Apps library that has
the install APIs. Now, what we do in the code is first check
for the Instant App context, because you want to unhide the button only
when you're running as an Instant App. You check if you're
running as an Instant App and, if so, you
unhide the install button. You use the install API to
present the dialogue that installs the app. Okay. Now, we covered how to install your app.
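The install-button logic just described might be sketched like this, using the Instant Apps support library's `InstantApps.isInstantApp` and `InstantApps.showInstallPrompt` calls; the view ID, request code, and post-install intent here are hypothetical:

```java
// Sketch: show an install button only when running as an Instant App.
Button installButton = (Button) findViewById(R.id.install_button); // hidden by default

if (InstantApps.isInstantApp(this)) {
    installButton.setVisibility(View.VISIBLE);
    installButton.setOnClickListener(view -> {
        // Intent to launch after the full app is installed (hypothetical).
        Intent postInstall = new Intent(Intent.ACTION_MAIN)
                .addCategory(Intent.CATEGORY_DEFAULT)
                .setPackage(getPackageName());
        // Presents the Play Store dialogue that installs the full app.
        InstantApps.showInstallPrompt(this, postInstall,
                INSTALL_REQUEST_CODE, /* referrer */ null);
    });
}
```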
When doing that, you want to make sure the transition is
seamless when the user moves to the installed app. It is
important to keep user context and migrate user-generated data
over to the installed app. The Instant App API
has a cookie that you can set and retrieve later. On Android 8.0
and above, you can directly use the framework cookie API – the
specific class is PackageManager. On 7.x and below, … . Let's
go over the cookie API. When running as an Instant App, you
check that your cookie fits within the allowance. This is done by
calling getInstantAppCookieMaxSize. You then
store your data with setInstantAppCookie. Later, the app is installed;
now you're in the installed state. You retrieve the data
back by reading the cookie. Then you
have the user context and can restore where the user was in your
Instant App. Also, recently, we added the ability
to have a “try now” button that launches your
Instant App. It is right next to the install button, removing any
friction. For integration with the launcher, we require apps to
provide a consistent landing screen. This is the default or
home experience for your app. We do this by adding a default-url
attribute: in the XML block, you see a metadata tag. This is the
home or default experience for your app. This activity should
be URL-addressable and support the browsable category. Okay, finally,
Instant Apps are all about removing friction, so we recommend that
you use Smart Lock for identity and Google Play Billing in your
app. Final thoughts, to summarise. Think of your app, as
I said, as a set of features. Each feature lets the user
complete a task. Start with a single-feature app to quickly
prototype, and then start carving out features one at a time. Use
the APK analyser and other tools to minimise your download size and
make your modules leaner. Provide a consistent landing
screen for your Instant App, and finally make the transition from instant to installed seamless
using the cookie API. Further information is here on
Instant Apps – that's the home page – with links to our samples and
code labs, a couple of useful resources that can help
you manage your module size, and related talks on Instant Apps and
modularisation. We're really excited about the experiences you
will build. Thank you. [Applause]. Good afternoon,
everyone. Welcome to the second-to-last talk. My name’s
Mike McDonald, a product manager on the Firebase team, and I'm joined
by two of my colleagues, Todd and Dan. We are here to do a
brief overview of how to build an app with Firebase. Many of you are
probably multilingual, English and a number of other languages.
I’m terrible with other languages. I speak English
passably. I travel a bit for work, so it's difficult: if I want
to go to a restaurant and order something, I kind of point at
things on the menu, or maybe I use Google translate and try and
translate it but it would be nice if I could talk to people
and have them understand what I’m saying. This is a problem
now that science fiction in particular has been trying to
solve for decades, right? If you've read a number of science
fiction novels – if you've read The Hitchhiker's Guide to the
Galaxy – there's a creature called the Babel fish that you put in
your ear, and everything everyone is saying is translated into
your language, and vice versa. Science has gotten pretty close
to that. I saw a demo a couple of years back of a peer-to-peer translation
tool. The kids were speaking in English and the kids in South
America heard it in Spanish, and vice versa. Some might be
thinking, hasn’t Google already solved this problem? So the
Pixel Buds came out a while ago. Tap them, and they will translate
into your language. Both of those took years, and we
have 30 minutes to build an application. Let’s see if we can
do that much faster using something we all have today: we
have smartphones, so we’re going to build an Android and iOS app
where you can select the language you want to speak and
hear and we will do the translation automatically for
you. So how are we going to do this? If you didn’t come to
this talk, you might start building something that looks
like this. This is the traditional mobile application: you have
your mobile app that talks via REST requests to some Python or
Django or PHP server that proxies requests out to
translation APIs, databases, or your file storage
systems. Unfortunately, there are a lot of problems with this.
Who is going to kind of manage and maintain those servers? Who
is going to get you the server in the first place? What about
uploading files over flaky networks? I’m sure we’ve
already been having conference Wi-Fi issues. How will you make
sure those files are uploaded appropriately? What about
authentication? What about performance and scalability:
if we take this app and give it to you guys and then it goes
viral, is everything going to scale? Is it going to keep
working? Who knows? Luckily, everyone came to this talk, so
you’re going to learn about how Firebase solves all of those
hard problems. Unlike the apps we saw on the previous page, Firebase eliminates
the middle-tier server. We handle things like client-side
authentication and flaky networks, so there's no need to
build resumable protocols or roll your own auth flows. Firebase
scales automatically, so the app that you build as a prototype
will scale to production the next week. Smart clients handle
all of those hard problems so you can focus on building
applications that your users love. So in case you haven’t
been paying attention over the last two days, and you’re like,
“What's Firebase?” – Firebase, very briefly, is
Google's mobile platform. It provides tools that help you
develop your applications on Google Cloud Platform,
understand what your users are doing in those applications, and
grow and engage your user base. But we can't build an app using all
19 of these features now, so I will turn it over to Dan, who will pick
and choose a few of them and show how we can build our
app. Over to you, Dan.
DAN: So a core tenet of Firebase is that you can pick and choose
the pieces that you need. I’m not going to explain those
orange circles, I’m going to pick the three that we need to
build this app. The first one is a highly scalable object
storage service that lets you upload exabytes of
user-generated content. We will couple with this
Firebase Auth, which handles those complex social flows and session
management with ease. The third one that I'm going to use is a
NoSQL document database called Cloud Firestore that stores and
syncs data. Let's see how we can piece these three things
together. Since we need to record audio and share it with
other users, we need to store the file somewhere secure that’s
accessible across multiple devices. Cloud storage is the
right solution for that. It comes with Firebase SDKs. It is
easy to select a file on your device and upload it directly
from anywhere. We have client-side SDKs on five platforms that
can meet you where your app is. These are smart and handle poor
mobile connections, pausing and resuming uploads to make sure
your file gets uploaded. Once that file is uploaded, you can
download the content directly from any Cloud Storage bucket
across Google's global network. The same robust support from the
Firebase SDKs enables you to do this. So, we have uploaded
the audio file to the cloud, how do you know who is allowed to
have access to it? That's where Firebase Auth comes in. It
lets you log in your users with common social providers, or
a plain email-and-password system, or any provider using our
custom authentication support.
Instead of spinning up servers to handle multi-legged flows,
our simple clientside SDKs handle all the flows and
securely authenticate your users. We’ve also recently
introduced phone-number support so your users can sign in via
SMS. This feature's really popular in India and other
emerging markets. Firebase also provides an open-source
pre-built set of UI components like the above that give you
access to a seamless signin experience right out of the box.
It doesn’t matter if your users are on Facebook, GitHub, or use
email and passwords. We always create a user record with a
single ID across all their accounts. Again, this happens
entirely on the client side. There's no need to build your own
auth integration. By using Firebase Auth, users are
able to access their files, no matter where they signed in from. Now that
we can upload and download files securely, how do we synchronise
the metadata, letting other users know there's a new
translation? Luckily, Firestore has this covered here. Cloud
Firestore is Google's newest database, a real-time,
document-based database. We will be storing data in several
collections: an uploads collection, a
transcriptions collection, and a translations collection.
Uploads is going to hold a link to the file we uploaded to the
cloud storage, transcriptions is going to handle the text
representation of that file, and lastly, in translations, we are
going to hold all the translations for that transcription. We will see
how these all connect shortly.
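As a rough sketch of the shape of that data — the document and field names here are hypothetical, purely to illustrate how the three collections relate:

```json
{
  "uploads": {
    "rec123": { "storagePath": "gs://demo-bucket/rec123.3gp", "language": "en-US" }
  },
  "transcriptions": {
    "rec123": { "text": "testing one two three", "language": "en-US" }
  },
  "translations": {
    "rec123": { "hi": "…", "es": "…" }
  }
}
```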
Unlike other databases, which are often request-response
style, Cloud Firestore pushes changes to you: you attach
a listener to a collection, and it will notify you in real time
any time the data changes. Let's switch to code to see how these pieces
help Mike build an app. The app we're
going to build has a log-in button, a selector for the language
you want to hear, a record button, and, finally, a text
field showing the translated text. On the developer side, we
are given an audio file, and we expect back a snippet of text that
we can play out loud. We’ve already imported Firebase using
pods and gradle and configured Firebase within the app. We need
first to upload the file. We create a reference
that points to the file and add associated metadata. We take
the local file and upload it, then add listeners for success
and failure. On failure, we print
out an error message. On success, we write the file
metadata into Cloud Firestore. Auth is critical
to any app and it’s really hard to get right. Since Mike’s a
product manager and wants this app done yesterday, we’re going
to cheat a little bit and use an open-source library we built
called FirebaseUI to implement the auth flows. The first thing we
do is set Firebase UI up and configure it to use Google. We
will add business logic and check if the user is already
logged in. If they’re not, we will launch that log-in flow,
otherwise, we will listen for new translations. We need to listen
for changes when a new translation is available. Since
we already wrote the metadata to the database after we uploaded
the file, we need to listen for new translations in our database. We add a document
listener to the database to get new translations. In the body
of the listener, we will fetch the specific language from the
database. After that, we will simply use the text-to-speech
APIs on each device to play the translation. Let’s try that out.
>>Testing, one, two, three. >>So we're not hooked up to the
back end right now, so we will have Mike do
the translations for us.
>>See if the internet works.
Now, Mike isn't a scalable back end. What we're going to do in
a moment is hook up the backend pieces that will do the
translation. For now, it gives Mike a chance to give us a manual translation.
>>Not getting it for some reason.
>>Okay. Live demos! I heard machine learning is pretty hot
these days. So let's see if there is anything Google
Cloud can do that can keep Mike from becoming our backend. Todd,
do you have any thoughts? TODD: Definitely. Let's write some software to
fill the gaps. Mike wants a universal translator app.
Let's do that in the trendiest way possible: let's add some
machine learning. But designing and training our own models
takes a lot of time, and we don't have that. We
want to get things up and running fast, so that's where machine-learning
APIs come in. We can pick from a big menu
available on the Cloud Platform, which solves a lot of problems,
including two that our app faces today: one of the
problems is transcribing voice into text, and for that
we will use the Cloud Speech API; and for translating one
language into another, we will use the Cloud Translation API. Our
problem’s solved, right? Well, not totally. How are we going to
use these cool APIs and get them wired into our app?
Traditionally, Firebase apps are serverless, meaning there's no
back end that you could bake these into, which is great; but as your
app gets more and more complex, you will start facing
challenges faced by a lot of app companies.
One of those challenges is secrets: users can learn a lot
from the files that make up your app – through the release code,
they can find things like the API keys for the cool APIs that we are going
to use. Also, the resources
on the phone: the CPU itself is fast these days, but battery
life is limited if you crunch on too many numbers or
phrases. Finally, code sharing: part of being a mobile developer
is having to implement the same functionality across multiple
platforms. Our app faces all three of these challenges, and
we can mitigate them by moving the secrets and the compute-intensive functionality into
the cloud. We only have to implement it once, rather than
once for each platform. If we dipped into our old bag of tricks, we
might solve it this way: spin up servers on Compute Engine,
create APIs with Django or Node.js. But that creates more complexity,
and before you know it, we are back where we started, with a lot of complexity,
not where we intended to be. Instead, we are going to use
Cloud Functions, giving us the minimal glue we need to deal
with our app server challenges. Instead of showing you a
silly diagram, we will prove it by building the app live.
We will write a cloud function which, unlike a mobile app,
doesn't need much to get started: just a few variables
and a function stub. Let's switch over to the demo.
>>First, we start by adding a function that triggers when the
document is written into the database. From there, we run a
function that will extract the metadata from the actual file.
This includes the language, the encoding, and the sample rate.
Then we will send that recording to the Cloud Speech API by pointing it to a specific location in Cloud
storage. Once we get a result, it will include the transcript,
and we take that transcript and write it back into the database.
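The metadata-to-request step described above can be sketched in plain JavaScript. Treat this as an illustrative sketch only: the helper name is hypothetical, and the request fields (encoding, sampleRateHertz, languageCode, audio.uri) follow the Cloud Speech API's documented REST shape as best I can tell, not the speakers' actual code.

```javascript
// Hypothetical helper: build a Cloud Speech recognition request from the
// metadata extracted from an uploaded recording. Rather than re-uploading
// the audio bytes, we point the API at the file already in Cloud Storage.
function buildSpeechRequest(meta, storagePath) {
  return {
    config: {
      encoding: meta.encoding,          // e.g. 'LINEAR16'
      sampleRateHertz: meta.sampleRate, // e.g. 16000
      languageCode: meta.language,      // e.g. 'en-US'
    },
    audio: {
      uri: 'gs://' + storagePath,       // location in Cloud Storage
    },
  };
}
```

For example, `buildSpeechRequest({ encoding: 'LINEAR16', sampleRate: 16000, language: 'en-US' }, 'my-bucket/rec.wav')` yields a request pointing at `gs://my-bucket/rec.wav`.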
Once we have the transcript, though, we still need to
translate it. We’re going to do the same thing as before but
with the cloud translation API. We will attach another function
that triggers when the transcript is written into the
database, and then, when it runs, it gets the language, and we iterate across the languages that our app supports and send a request to the Cloud Translation API for each one. As
they’re translated, we write them back to the database, and
once all the translations are complete, we signal that the
function is complete. So now we can go back to the slides. So basically, a
deploy is simple. It takes a few minutes. We know your time is
precious. Here's one we prepared earlier. You run firebase deploy, and pretty much we do the magic. Now I'm going to hand
it back to Mike. MIKE: Thanks, Todd. The demo has been acting
up a little bit so we’re not going to do that, unfortunately.
What would happen is: you upload the files, they go through Firestore, all of the magic in our functions happens, and then the translated languages get spit back out.
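The translation fan-out those functions perform can be sketched as pure logic, independent of Firebase. The helper name and task shape below are hypothetical; the sketch only shows the idea of one translation request per supported language, skipping the language the transcript is already in.

```javascript
// Hypothetical sketch of the fan-out step: given a transcript and the set
// of languages the app supports, produce one translation task per target
// language, skipping the source language itself.
function translationTasks(transcript, sourceLang, supportedLangs) {
  return supportedLangs
    .filter((lang) => lang !== sourceLang)
    .map((lang) => ({
      text: transcript,   // what to translate
      source: sourceLang, // language the transcript is in
      target: lang,       // language to translate into
    }));
}
```

Each task would then be sent to the translation API, and as results come back they are written to the database; once all tasks complete, the function signals it is done.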
That is pretty cool. I promise it does work. We’re having a
little bit of flakiness over the internet, but you can actually
go on GitHub. If you look for zero-to-app universal translator, you
can get all the code and build it yourself. So we just built
two apps in about 20 minutes complete with our own backend.
If you hadn’t come to this talk, you might have built something
that looks like this: your application talks to that
server, proxies everything out, but that costs a lot of money,
takes a lot of time and requires that you get paged at two in
the morning if the server goes down.
You don't want to own that management. Dan showed us how to replace that with Firebase and allow your application to talk directly to our various services, in order to build serverless and management-free applications. Then Todd showed how, by adding the rest of Google Cloud Platform and its cloud APIs, to securely and performantly build the rest of that application. Just as easy as it was to integrate both of those APIs, it is equally easy to integrate additional Google Cloud Platform services, including other pre-trained ML APIs, your own ML, or additional cloud products.
As you saw, it is really easy to chain events together to turn
complex processes into really simple applications. Thank you
all for coming. If you have additional questions, we will be
right behind in the speaker zone area. Thank you all very much and enjoy the rest of the conference. Hi, everybody. I'm
Taylor Savage, working on our many different open-source
web developer products we build as part of the Chrome
team. I’m here today to talk about one of the products we
work on Chrome in particular that we’ve been working on for
quite a long time. It’s the Polymer project. Also to talk
about some of the underlying new exciting web technology that
makes Polymer possible. That’s web
components. I want to talk about some of the problems we set out
to solve when we went and started the Polymer project. As everyone here well knows, the expectations on web developers are extremely high. We are expected to build sites that work across browsers, across all different kinds of screens, that run at 60 frames a second, are immediately responsive, load fast even on flaky networks, and can send push notifications. All of this makes building a modern mobile website difficult today. But the
tools we are given as web developers haven’t caught up to
this challenge. The web was initially designed for
documents. So the primitives we get on the web are tags for
headers, lists and paragraphs, things that were great when you were
trying to send a document over the wire but don’t really work
for the applications that we’re trying to build today. So we
don’t get primitives, for example, for things like UI tab
strips. UI tabs are found across modern mobile
applications but this is surprisingly difficult to
achieve on the web and this is something that should be
first-class and easy when we try to build websites. Over the
years, we’ve come up with lots of different ways to basically
effectively munge together these low-level document primitives
to end up with something like UI tabs. Some frameworks will have us overload existing tags and scaffold out any UI elements we want to build using HTML, with script that understands the existing HTML and creates a tab strip. Others will have you write everything in JavaScript, customise your tab strip through an API, and not worry about the DOM it is going to construct. Others will be some sort of mix: they will nest HTML and JavaScript into a kind of Frankenstein monster. These different approaches are really
ultimately solving the same kind of problems: we are trying to
build a reusable UI component, but these approaches are very,
very, very different. It can sometimes feel there’s a new
exciting. This is not a bad thing. Only the web platform
affords the kind of scale, flexibility and diversity to
frameworks. This is it great, really good. For us on the web
platform team, at Google, it is indicative of a larger problem
and it causes a lot of problems for all of us as a web developer
as well which I will get into a little bit. So on the Polymer
project on the Chrome team we recognise that this explosion of
frameworks was awesome, it was a testament to the power of the
web platform but also an indicator that there was
something wrong. There was some underlying problem that needed
to be solved by all these different frameworks, a gap in the web platform itself that we were all looking towards JavaScript frameworks to fill. When we set out on the Polymer project, rather than building yet another framework, we set out to fix the underlying platform problems themselves, to actually build a better web, one that is
more conducive to application development, helping to solve
the problems that the frameworks are trying to solve in the
platform itself. So what do I mean by that? That sounds hard
and complicated. Where do we start? We started by looking at the frameworks themselves, which again solved many of the same problems in slightly different ways. Many of us on the Chrome platform team were, in the past, very well acquainted with some of the problems that this proliferation of frameworks can cause for web development. So there are two main costs that expose themselves, costs we set out to solve by baking new primitives into the web platform itself. The first cost is a lack of interoperability. What
do I mean by that? Frameworks provide a lot of value to make web development more efficient, and many bring their own proprietary stacks of functionality, above all a component model. That's the crux, allowing you to build a standalone reusable component. But the components you build in the
framework are fundamentally tied to that framework stack and
that run time. This is fine if you’re one developer building
one site, but if you're working with a bunch of different teams where each one wants to use their own paradigm or framework, you're stuck. You can't share components at all.
If you're one developer that built the site once and you want to move to a more modern framework later on, those components you built for the old site are useless. You can't port them to
the new framework. So this makes for a really incredibly high
switching cost when you want to switch frameworks, and it also
makes that decision of what framework should I use to be
kind of the most important decision that you have to make
for your entire project, and you have to make it day 0, which is
really scary. It also fractures the component ecosystem across
all of us web developers. This is a costly problem and
decision. Not so much for a single developer but as your
team scales and as the ecosystem scales, it becomes an aggregate
cost. The second cost we set out to solve is one your users pay,
which is extra overhead specifically on mobile. So the
mobile web, as we know, has incredible reach but it reaches
into places where your users might have slow connections,
flaky mobile connections, expensive mobile connections or
be running your website on slower, older devices.
Furthermore, user expectations on mobile devices are extremely
high. Users expect these applications to run at 60 frames
a second, to load extremely quickly, so it is incredibly
important that we are able to build sites that can load
quickly on mobile even despite these difficult network conditions. But when we rely on a framework to provide our component model, the browser must download and then parse that entire framework before your web app is even close to running. This
is fine on big beefy machines with wired internet connections
but doesn't fly on flaky mobile connections. We're pushing all of that framework code through the narrow pipe. The cost of abstraction becomes
extremely real when you get out into the real world and start
loading the frameworks on real mobile devices in real networks.
So our mission on the Polymer project is to make the web
platform itself a more capable development platform. So you can
build web applications with way less overhead, with way fewer
costs, and then take advantage of what is already sitting there
directly in your users' pockets, on their devices, in the
form of the browser that they have on their phone. So our
motto for the project overall is to use the platform, to use
what is already installed on the users’ device, the browser. So
there is some good news. The web platform itself is actually
incredibly powerful. I wanted to show you a little bit by what I
mean by that, by taking a look at an element, an HTML element
many of us are familiar with which is the humble “select” element.
We’re probably familiar with select. It provides you with
this dropdown menu that you can click on and it’s an incredibly
simple element but provides this incredible amount of power all
hidden away behind the select tag. If you put an empty select
tag on your page, you get this button-like thing with the
arrows and nothing else. You will notice this one tag gave us
quite a bit of UI. We didn't tell it how to draw on
the screen, or put those arrows in there. We got it for free as
part of the select tag. It is composable. We can add options.
We simply compose new option children under the select
element itself, and now we get this more complicated UI. We get
the ability to mouse over and the new option that the user is
hovering over highlights when they tap that option, it becomes
the selected option in the dropdown. You can use arrow keys
and things like that to navigate the select dropdown.
Again, all of this for free by using this little tag. It’s also
declarative. Beyond adding children to the select element, we can use HTML
attributes to make certain options selected or disabled. We
can change the behaviour of this component with the
declarative attribute. It is forgiving, so, if we mess up,
and if we put the wrong child in our select element, the browser
doesn't crash, our web app doesn't crash; it will ignore
that and keep moving on. It is accessible to a screenreader by
default, so it gives you nice attribute handles to make sure
this element is fully accessible, and it is also
programmable. The document object model, the DOM, provides
information using properties on the component; you can listen to
events that it fires when state changes within the component,
and you can call methods on the component if you want it to do
something. You’ve got this full rich comprehensive API for free,
just by putting this tag on your page. All in all, select is
a pretty amazing element. You can reuse it all you want; you drop it on your page and don't worry about it impacting other things on your page. This is the kind of UI component we all want. So you might think, okay, let's take that select element, which again was great for documents and simple forms, and let's build a whole
bunch more elements for more mobile-friendly UI components.
On the web platform team, we might be tempted to do that, to add a bunch more HTML elements like select. Indeed, in the past,
that has kind of been how we’ve gone about adding functionality
into the web platform. A few years ago, a bunch of browser
vendors got together and wrote what was called the Extensible Web Manifesto, basically an agreement saying we
as browser vendors can’t anticipate what the next
high-level abstraction or component you’re going to want
is going to be, so instead of doing that first, we’re going to
take a step back and build low-level primitives that make
the browser itself extensible so you as a developer get the full
power that we as the web platform creators would have in
order to create your own HTML elements. Rather than us
defining the language for you, you get to build the language
itself. So, we want to be able to provide all of the power that makes “select” would be to
you the developer in the form of new low-level APIs. What
would you need in order to build your own HTML element like
"select"? You need to give it a prototype and let the document
know you’ve defined this new HTML element.
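In a browser, that registration step is window.customElements.define(tagName, klass). As a sketch that runs anywhere, the snippet below uses a tiny stand-in registry; everything except the define-a-class-then-register pattern (and the real rule that custom tag names must contain a dash) is hypothetical.

```javascript
// Stand-in for window.customElements, so this sketch runs outside a browser.
const registry = new Map();

function define(tagName, klass) {
  // Real registries enforce this too: custom tag names must contain a dash
  // so they can never collide with built-in elements.
  if (!tagName.includes('-')) throw new Error('custom element names need a dash');
  registry.set(tagName, klass);
}

// The "prototype" for the new element. In a real browser element this would
// extend HTMLElement, and connectedCallback would set content rather than
// return a string; returning it here just keeps the sketch observable.
class MyGreeting {
  connectedCallback() { return 'I am a custom element'; }
}

define('my-greeting', MyGreeting);
```

After registration, the browser (or here, the registry) knows which class backs the tag, so dropping the tag on a page can instantiate the class.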
You want to define the UI. You need to be able to define that
yourself and encapsulate it for your particular element. You
need to be able to abstract a way how it manages its children
and abstract away any styles for the element how it works so it
doesn’t impact the rest of your page when you use your own
element, and you want to let it declare and handle its own
dependency. If the element relies on other elements, it
needs to be able to load those itself. All of these features
I've described, that make building your own HTML elements possible, are part
of the new web component standard. This is a set of APIs
that let you the developer extend the language of HTML
itself and build your own HTML tags. I won’t go too deep into
web components. There is a trove of information out there, but they are made up
of three new primitives: custom elements, which let you define your own HTML elements and tags and add behaviour onto them; templates, which let you define your element's internal layout in an inert and easily clonable way; and shadow DOM, which gives you composition and lets you encapsulate any styles you may want to apply to your own element. Those of you following web components might
have noticed I’ve left something out here which is HTML imports. Traditionally, HTML imports have
been part of web components. They were conceived as a way of loading component definitions, but since browser vendors are shipping ES modules as the way to load code, we are leaning more towards using ES modules as that mechanism. I've left imports out of here to
avoid any confusion. Web components really help to solve the two major costs that I talked about earlier. They're naturally interoperable: at the end of the day, all frameworks are manipulating DOM at the lowest level, so DOM itself becomes
your interoperability layer.
Furthermore, all this powerful component lifecycle and encapsulation machinery is now baked directly into browsers, so you
don’t have to ship down an expensive bespoke component
model in the form of a framework. All of that component
power is sitting right there on the browser with the web
platform native APIs. So what does it actually look like to
use a web component? So, on the Polymer project, we’ve built
out a large set of elements ourselves to provide the look and feel for
Google’s material design UI paradigm, and we built all these
out as web components, so you can drop something like a paper
button or tool bar under your page and get this nice-looking
mobile UI component like you would with a select element. I'll
give an example of how you might use one that we’ve built. If
you wanted a responsive tool bar, for example, you could use
our paper-toolbar element and this provides a tool bar that
sticks to the top of the page and you put things inside the
tool bar that makes it more interactive.
You load it by using the definition we’ve built and
dropping a paper-toolbar tag directly into your document. To
give it a title, you can nest a div as a child of that
paper-toolbar and put the title and you get this mobile-friendly
responsive UI component. The custom elements are really all
about composability. If you wanted to add a hamburger menu
to the left part of your tool bar, you can nest another
component we’ve built inside your paper tool bar, set the
icon to menu, and that will render the hamburger icon, and you get
a paper-toolbar just declaratively using the web
components like you would write normal HTML. That’s pretty
straightforward. Let’s jump into something more complex, a more
complicated use-case for web components. So let’s say we
wanted to express an entire API as an element. So, for example,
let’s say we wanted to add a marker to a Google map and drop
it on our page. If we wanted to do this using the Google maps
API directly, it takes a whole bunch of code in order to set
up, and it’s kind of messy. In the world of web components,
though, we can encapsulate all of this code and behaviour into
a single tag, the Google map element. We can load its
definition and put Google map on our page and you’ve got Google
map. Let’s say we wanted to centre the map at a specific
latitude and longitude. With custom elements, we can do
exactly that by using attributes. Just like attributes
and HTML, we can put a lat attribute and a long attribute
and give it values centring the map wherever we asked it to. We
can also set a zoom level as an attribute to zoom into the map.
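Under the hood, an element like this has to turn its string attributes into the numeric options the underlying maps API expects. The sketch below is hypothetical, not the google-map element's real code; it only illustrates that attribute-to-config translation.

```javascript
// Hypothetical sketch: turn the string attributes of a map-like element
// (lat, long, zoom) into the numeric options object an underlying maps
// API would expect. Attribute and option names here are illustrative.
function mapOptionsFromAttributes(attrs) {
  return {
    center: {
      lat: parseFloat(attrs.lat),   // attributes arrive as strings
      lng: parseFloat(attrs.long),
    },
    zoom: parseInt(attrs.zoom, 10),
  };
}
```

It is entirely up to the element's author what attributes exist and what behaviour each one imparts, which is exactly the flexibility described above.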
Adding an icon is intuitive in the way you use HTML as well.
You simply add a Google map marker element as a child of the
Google map, make it draggable, give it a title, and, again like
that, you get a map marker on your map. Really nice
declarative way of putting together a Google map. We can
express the full complexity of the Google maps APIs or whatever
part you want to express as attributes on our elements. It
is totally up to us what the attributes look like, and what
behaviour they impart to the element itself. We actually
prototyped a whole bunch of components out there for other
types of services as well, and you can experiment with your own
services and other things you want to encapsulate as a web
component. That’s kind of the power of using web components,
extracting really complicated or simple, like UI elements behind
HTML tags. So what is Polymer?
The web component APIs I talked about, custom elements, shadow DOM and templates, are low-level APIs, very, very raw. You can use them directly, but you end up writing the same boilerplate code over and over
again. We pulled out some of the common boilerplate code for
creating web components into a library which is called Polymer.
Polymer itself is just an opinionated library that
provides a lightweight sugaring layer that makes it easy to
build web components in a don’t repeat yourself way. Let’s take
a look at how you use Polymer to build a component. These
examples come from the new Polymer 2.0 version of the
library which uses a really nice ES6 class-based syntax.
Let’s look at a very simple just totally basic element, just a
custom element that just says, "I'm a custom element". To create an element with Polymer, you create a new custom element class that will be the class for your element, and you extend the Polymer.Element base class, which gives you the functionality you might need to construct your element. You define an "is" getter; we're calling the tag for our element custom-element. In this particular element, we will set the text content to be "I am a custom element" when it boots up. Then we have to tell the browser that we've created a new element: we use customElements.define, which lets you give the custom element registry a tag name for the element, which in our case is custom-element, and the class that will represent that element. Just like that, when we put the tag on a page, we will get "I am a custom element." So, fairly
straightforward. You can start to see sort of the power that we
might be able to get by using these custom element APIs, and, on top of them, Polymer. Let's say we wanted to make the element's UI nicer
by giving it a template. So associating an HTML template
with a custom element is a common paradigm so we make it
trivial with the polymer library. This provides a notion
of a DOM module which is effectively just like a bucket
of HTML that you’re going to want associated as the template
for your element. Whenever your element is on a page, this
template will provide that element’s UI. Inside the DOM
module, you want to give it an ID that matches the tag name of
your custom element so that Polymer can find it, and you want to give it a template child, where you will define any of the HTML that
makes up the common user interface for your particular
element. In this case, our element is going to be very
simple, it will just be a paragraph that says I’m a DOM
element. This is my shadow DOM. Again, we will define the base
class like we did in the last example and register it with the
browser. When we put the DOM element on the page, Polymer
itself will stamp out that template and we will get I’m a
DOM element, this is my shadow DOM. Again, fairly
straightforward. But we want to make sure that the behaviour of
our specific element won’t leak out and disrupt the rest of the
page. We can’t have any style we apply to our template
accidentally applying to the document. We want our element to handle children elegantly. So this is where shadow DOM
comes in. Now, Polymer automatically puts all of the
contents of an element's template inside of an encapsulated shadow root. This is
powerful. This provides an element with a safe bubble of space where you can put DOM or CSS, and all of it will be
encapsulated inside that shadow root. I will show you what I
mean. Let’s say we want to build a picture frame element which
just provides a little grey border around a specific image.
So you can see here that inside our element’s template, we give
some style to just the basic div tag. We give it a grey curved
border. Now, normally, if we provided the style to div, this
would totally screw up our entire website. Every div on the
page would have this border. But with shadow DOM, the style gets encapsulated away inside our particular element's shadow
root. Also, shadow DOM gives us the power of projection. It lets
an element define where in its template any children in that
element should be effectively displayed. This is the insertion
point for your particular element and we declare this
using the slot tag. So, in this picture frame case,
in our template, we have a div that we’ve styled above. We want
to effectively project through any children of our picture
frame element so that it looks like it is appearing inside of
that div. So, when we use our picture frame element, we will
have our picture frame tag, and as a child, have any image tag
we want, and it will get projected in that slot and get
that nice grey border painted around it. This will be
encapsulated away. So there’s even more. Polymer provides some
basic functionality for doing declarative data binding inside
your element's template. Say we wanted to create a name-tag element where the owner property on the DOM node itself will be a string, and that will show up inside the text, "this is owner's name tag element", inside of our template. So in our Polymer template, we can use this curly-brace syntax and reference the owner property on the prototype; that property is what will be bound. And so, in our element's constructor, we can set the owner to be Daniel,
and, when we add the name tag to the page, the result will be
this is Daniel’s name tag element. If we change the value
of the owner property on that specific element, that value in
the string there will update automatically. Polymer will
handle all of that. So I’m only scratching the surface of all
the things that Polymer provides. There are a lot more handy functions and things in Polymer that make it easy to build a reusable component. Now, the ever-important question
is what browsers does this work in? Polymer and web components
in particular have seen incredible browser uptake. The
specs have been worked on for a while, but a new version, V1, came out about a year ago, and most major mobile
browsers now natively support web components, so Chrome,
Safari and Opera support web components in the latest version
of these browsers. Firefox has started working on their own web components implementation, and it's a high priority on Edge's road map, with UserVoice requests for the features. And fortunately, we provide a
set of web components polyfills. You can still use the web component APIs; they will be slower, but the polyfills take you all the way back to IE11 and Safari 9. Support is broad for web
components thanks to native support and polyfills that help
support older browsers. The great part is, as newer browsers start shipping native support, when Edge comes out with support, they will get that support for free, and the
polyfills will drop away. The rest of the code will continue
to work and you won’t have to load that extra polyfill code.
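The loading strategy described here boils down to feature detection: ship only the polyfills the current browser still needs. Below is a minimal sketch of that decision logic, with hypothetical feature flags and polyfill names; the real polyfill loader's API is not shown in this talk.

```javascript
// Hypothetical sketch of feature detection for web components polyfills:
// given which primitives the current browser already implements natively,
// return only the polyfills still needed. On a browser with full native
// support, the list is empty and no extra code gets fetched.
function polyfillsNeeded(features) {
  const needed = [];
  if (!features.customElements) needed.push('custom-elements');
  if (!features.shadowDom) needed.push('shadow-dom');
  if (!features.templates) needed.push('template');
  return needed;
}
```

In a browser you would derive the flags from checks such as 'customElements' in window, then load only the scripts the returned list names.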
So Polymer and web components are seeing serious adoption in the wild, on well over 4 million pages. We broke our monitoring system
so we can’t tell you how many pages but it is definitely over
4 million. We also use Polymer inside Google, so there are 500 Google projects using Polymer. And then other major Google
products as well as major brands around the world are using
Polymer for their UI construction today. In fact –
you can learn more about the Polymer project if you’re
interested at the website. I would recommend checking out the growing catalogue of web components: as of last count, over 1,300 unique components built by developers all around the world that you can download and use directly in your applications for this great functionality. That's just scratching the surface. There's also the Polymer App Toolbox,
which is our set of tools and components that make it easy to
build a Progressive Web App. If you’re getting started building
a Progressive Web App, the Polymer website is a great place
to start. That's it. Thank you very much. Good evening. Thank you for joining me in this session. I'm Sayeed
Malik. I’m super excited to be here. And to be talking about
some of the most common issues and misconceptions relating to
search engine optimisation, or SEO, that can affect the
visibility and appearance of your pages in Google search
results. What I have for you in these slides is basically three
things: first, we will look at the general mistakes that
publishers make, or some of the things they tend to ignore, and
some of the misconceptions that they have. Then we will move on
to look at things that are related to mobile-friendly
sites, and then I will leave you with a piece of advice on how
to hire an SEO, meaning, if you want to hire someone to do SEO, what are the things you should look out for and watch out for? Some of the
obvious things, these are some of the common mistakes that
publishers make when they do SEO, or common things they tend
to ignore, thinking they’re trivial, and some of the
misconceptions they base their SEO strategies on, which, being misconceptions, will fail them. What I've done in
this section is basically collected a bunch of statements
from Google’s guidelines and also some of the statements that
we hear from the SEO community and the publishers, time and
time again, and some of these statements are true, and some of
them are not. Let us see how many of these you
get right, okay? Okay, the first one: descriptive page titles, which you implement using the title tag, and the meta description tag that you use in your HTML documents, are important for better search ranking. How many of you think this is true? How many think this is not true? A few hands.
All of you who said this is true, were correct! Yes, this
is absolutely true. If you think about it, although this may
look like a trivial thing, you know, just a title and a page
description, which is not even visible to users on the web page itself, these are important. If you think about it, when a user is on a search results page, these are the only window they have into the content of your website. When your site appears in the search results, this is the only window they have, and it is what allows them to judge whether they want to click through to your site or not. That's the decision point.
If you do not basically give good creative and useful titles
and descriptions for your pages, even if you manage to rank well
in Google search results, you may lose traffic. This is not to
say that Google always takes the titles and the descriptions
that you give or you provide, Google may also take the title
and the description from, for example, the content of your
page, right. It tries to make the description
and title of your result as relevant to the query as
possible. If you do not have a lot of content on your page
itself, it becomes even more important for you to provide
good titles and descriptions. Now for the next statement: content is still king. Yes, of course you have your titles and your descriptions and everything else you've done for
SEO but if you do not have good-quality useful content,
organic original content that can give value to your users, do
do you think you're going to do well in the search results?
Do you think content is king ultimately? Yes, how many of
you? How many of you think it is not? I will put my hand down
because I do think it’s extremely important. I still see
some hands. Some people think content is not important which
is no, no, no, absolutely wrong. Because content is definitely
the king, because, if you think about it again, why would
somebody like to come to your page? What are they coming
there for? Not to see how well you have optimised your
pages for search engines, they’re coming there for
basically the content of the page, right? They want to
consume the content. If you don’t have the content providing
value to them, usefulness to them, then of course they will
not like your web pages. When they don’t like your web pages,
Google would not like to show those web pages to users. Of
course, right, because they don’t like it. Now for the next
one. There is a minimum and a maximum limit of words for an
article to rank better, right? Meaning if your article is too
short or too long, it may not do well in search results. How
many of you think this is true? How many think this is not
true? A very, very mixed response.
50-50, I can say; I can see the results. Of course, this is not true. If you think about it, some information can be
delivered completely in one paragraph, for example, and the
other information, you may need an entire page to explain,
right, for it to be complete. Instead of counting the number
of words in your articles, you should be focusing on are you
providing complete information to users? Organic, complete,
and authoritative information to users is what actually matters,
not the number of words in your articles. Now, for the next
one: there is an optimal key word
density that can help rank better for the desired key word.
Meaning if it is a page and you’re targeting the page to
rank for a particular key word, now, this word has to repeat a
certain number of times on that page, otherwise Google may not
pick the key word from the page, may not associate your page
with that key word, and may not rank that page for the key
word. How many of you think this is true? Wow, a lot of hands.
How many think this is not? For the people who said that they
think it is not true, you have good news: you're right. This is not
true. Okay? So there is nothing called “optimal key word
density”. I know there were a lot of hands that were raised
and these are the misconceptions I wanted to bust in this
session. There is nothing called “optimal key word density”. You
should focus on the user. Of course, the key words are
important. You should step into the user’s shoes and think about
them and think what are the words they would type into
Google search when they are trying to look for content that
you are going to write? Right? And consider those key words and
try to include those key words in your content. Definitely. For
sure. But overdoing that, it is not going to help you. Indeed,
it can basically harm you, you know. Imagine reading a
newspaper article which is full of key word stuffing, you know?
You see one word repeating many times, would you like to read
such an article in a newspaper? Absolutely not. Try
to make it as natural as possible. Do not focus on the
density of the words, their proximity, and things
like that. Focus on your user and make the article sound as
natural as possible and give them value in your articles. Now
for the next one. It's super important to fix all the 404 errors you are warned about. How many of
you use search console here? Not a lot of you. How many of
you – I mean, so the rest, I’m assuming, you don’t use search
console. So Search Console is basically a free tool, and it is
the only tool in the world, by the way, that can give you
information about your websites in relation to Google search, as
in problems that Google is encountering indexing your pages,
and other issues that it is encountering that can hinder
your visibility. This is the only tool that can
provide you this information. I would highly
recommend that you use this tool. We have a link to this tool
in the coming slides. Note that down. I would recommend you
sign up for this tool. When you sign up for the tool, it shows
you the 404 errors that Google encountered. It
might have found links to your pages which do not exist,
hit some errors there, and it shows those in the report. How many of
you think fixing all these 404 errors is mandatory, that you must
fix them, otherwise your ranking will get affected? Many hands again.
Okay. The good news is no, you don’t need to, right, because
for the pages that do not exist, they do not exist, right? It
is right for them to show a 404, and that is what those pages are doing. But Google is
somehow coming across the links pointing to those pages and
trying to crawl those pages, and showing you those warnings. If
you think a page need not exist on your website, you can
simply ignore the 404s. What happens eventually is Google
tries to crawl these pages a number of times, and eventually
comes to understand that these pages do not exist, they're
gone forever, and stops showing you these errors. For the pages
that do exist, right, but Google is still showing 404, you
should check those and fix them. Here is another one: Google’s
algorithms are way too smart for me to need to do anything to help them
understand my images better. It means they're very, very
proficient and understand all kinds of images. When you
put an image in your HTML document, there's nothing else
that you need to do, just provide the image, and Google
will take care of it. Tell me if you think this is true.
Finally, only one hand! Oh, two hands! Which is good news for
me, because, yes, most of you think this is not true, and, of
course, this is not true, while Google algorithms are like, yes,
they are great, they’re brilliant, they can understand
images, of course, a lot, but there are times that they cannot
understand images as well, and they cannot understand all types
of images. For example, say you've been on a holiday to an
obscure place and taken a picture of that obscure place.
When you put this picture up on an HTML page, Google may not
necessarily identify that place. Or, as simple as it sounds, it may not
differentiate between a cup of coffee and a cup of tea when
they look identical. As simple as that, right? These kinds of
things. What can you do to help Google search understand your
images? When Google search is able to understand your images,
it is able to show them in the search results. It is important
for you to help Google understand these images. So what
can you do to help Google understand your images? Make
use of the alt text in your HTML, right? On the image tag,
you have the alt attribute. Try to describe your images there with
a couple of key words, right? You can also provide the
caption for images wherever applicable on your web pages.
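As a minimal sketch of the alt-text and caption hints being described here (the file name, alt text, and caption are all hypothetical examples, not from the talk):

```html
<!-- Illustrative only: the file name, alt text, and caption are made up. -->
<figure>
  <!-- A descriptive file name instead of something like 001.jpg, plus
       alt text that describes the image with a couple of natural keywords. -->
  <img src="jog-falls-karnataka.jpg"
       alt="Jog Falls waterfall in Karnataka during the monsoon">
  <figcaption>Jog Falls, Karnataka, at full flow.</figcaption>
</figure>
```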
You can also name your files properly: instead of naming a file 001.jpg,
you can give it a proper name that describes the image
itself. These are the places where Google can take hints from
to understand your images better. Okay, that brings us to
the next part of this session, which is common issues around
mobile-friendly sites. These are some of the common issues we
come across when we try to crawl and index them. Google
faces a lot of trouble and may not be able to crawl and index these
sites. One of the most prominent problems is blocked
resources. Meaning, while a lot of sites
allow their pages to be crawled and indexed, sometimes they,
knowingly or unknowingly, tend to block the attached
resources. For the resources that are attached to a web page,
it is important for you to make these available to Googlebot as
well. Google tries to render your content the way
it happens in your browsers. It tries to do the same thing with
your pages, to get all the content and the context to
understand them better; this is how it tries to understand
your pages. When you block your CSS and your website is a
responsive design, Google may not know that yours is a
responsively designed website and it may not think your website is a
mobile-friendly site. If you're providing content to users
and you block the resources that deliver that content, it may not be able to
see it. So it becomes important for you to make these resources
available to Googlebot as well. Unplayable content. This is
another issue we've seen on major websites as well, by the
way. You know, when you use plugins that are not supported
by mobile browsers, for example, this makes for a very, very bad
experience for the users and it creates problems
for Google. When using Flash on your pages, Google may not be
able to get to the content that is within the
Flash content. It may not be able to read through it, right?
Your users who are accessing your web pages from a mobile device
may not be able to access it, so it makes for a bad user
experience. These are some of the things that can actually affect
your SEO as well. Intrusive
interstitials. I'm sure we've all experienced this. You suddenly
see a pop-up which is covering an entire page, blocking you
from accessing the main content of that page, or doing the
desired action on that page, until you do something: you
close it, download something, or things like that, right? This
is against Google’s policy guidelines because this does not
make for a good user experience.
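As a rough illustration of the alternative being suggested here (the markup, inline styling, and wording are all hypothetical), a small banner that leaves the main content usable might look like:

```html
<!-- Illustrative only: a small dismissible banner pinned to the bottom
     of the viewport, instead of a full-page interstitial. -->
<div style="position: fixed; bottom: 0; left: 0; right: 0;
            background: #f1f1f1; padding: 8px; text-align: center;">
  Try our app!
  <a href="https://example.com/app">Install</a>
  <button onclick="this.parentElement.remove()">Dismiss</button>
</div>
```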
What we suggest is, if you have to show something, use
smaller banners without blocking the main content of your pages. Slow pages. This is again more
around the user experience perspective. I'm 100 per cent
sure that each one of us in this room has experienced this,
especially when you’re on your mobile devices, on the go, you
try to access information and that page doesn’t load. Surveys have
shown that 53 per cent of the people tend to leave your
website if it takes just more than three seconds to load,
right? You heard it right. 53 per cent of the people tend to
leave your website if it takes more than just three seconds to
load. Unfortunately, surveys also show that 75 per cent of
the websites that are there today take more than ten
seconds to load. That's ironic, right? You would definitely want to go
back and check how you can improve the speed and when
you’re building your mobile pages especially, try to build
them for flaky connections, interrupted connections and
mobile users. The last one here: faulty redirects. This is one of
the major problems that create problems for Google bot as well
as users, right? What we have seen is that this is applicable
especially when you take the separate URL approach, meaning
you have example.com and m.example.com as
your mobile website. When someone is accessing your desktop
version from a mobile phone, or your mobile version from a desktop, you
need to redirect your users, and sometimes these redirects do not function
properly. If they do not function properly, they create a
lot of problems for Google crawlers in understanding your
content, crawling and indexing your content as well as creating
a frustrating experience for users. In an ideal scenario,
what should happen is all your desktop pages should redirect to
the corresponding mobile version of your pages, right?
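For the separate-URL approach, this desktop-to-mobile pairing is usually also declared (alongside the redirects) with alternate and canonical link annotations; the page path below is a hypothetical example, with example.com and m.example.com as in the talk:

```html
<!-- On the desktop page, http://example.com/page (path is illustrative): -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="http://m.example.com/page">

<!-- On the corresponding mobile page, http://m.example.com/page: -->
<link rel="canonical" href="http://example.com/page">
```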
But what we have seen is sometimes,
for example, if there is a page that is only available on the
desktop version of your site and this particular page is not on
your mobile version, people tend to redirect users trying to access
the mobile version of this page to the mobile home page, or they
just give a 404, right? Both of these practices are not
great, because users do not understand what a 404 is and
they’re more confused when redirected to the home page.
They try to go back, and that creates a very, very bad
user experience. Even when Google is trying to crawl your
pages, if you're redirecting it here, there, and everywhere,
Google gets confused and may have difficulty crawling
your pages. Those were the common issues related to mobile-friendly
sites. Here are some basic
tools to make sure that Google understands your website as
mobile-friendly and to help you make your websites
mobile-friendly as well, right? On the left, you have testmysite. You enter a URL
and it gives you a binary answer: whether Google considers your
website mobile-friendly or not. You open it on a mobile
device; does it work fine? At the back end, some of the resources
may also be highlighted using the tool. The other tool that we
have is the mobile usability report. This is basically
accessible through Search Console; if you haven't signed up, you may not be able to
access it. This gives you a more holistic view of
your entire website and of the problems that Google encountered
while it was trying to understand and crawl your mobile-friendly
site. Right. There are more tools, of course, and time
does not permit us to go through each of these, so I will leave
you with the names of these tools so you can go back and
explore. I'm sure they were covered in the other sessions
you have attended, like the Chrome User Experience Report, Lighthouse,
webpagetest.org. They can help you increase your page speed, make
your websites mobile-friendly, and things like that. When talking
about mobile-friendly websites, our discussion is incomplete
without mentioning mobile-first indexing. How many of you have
heard about that here? Very, very few hands. Okay, what is
mobile-first indexing? So far, Google considered your desktop
version of your websites as the primary version, so it relied
upon your desktop version to get information like the primary
content of your web pages, the metadata, and
the structured data. This is changing. Google will now
consider the mobile version of your pages and will rely on your
mobile version to get this information. Meaning if you do
not provide this information, the full primary content, the
metadata and the structured data on the mobile version of your
site, Google may not get all the information, right? It may not
be able to show your pages as well as it could if you did
provide it. So it now becomes important for you to ensure that
all of these three things – primary content, metadata, and structured data – are equivalent between your mobile
site and your desktop site. If you're using a responsive
design, for example, there's not much for you to worry about, or
even the dynamic serving approach, if that is how you have built your
mobile-friendly website. What is happening in responsive web
design, for example, is that you're changing the format of the same
content. The content remains the same, but you change the
format of the content to serve it to desktop users and to the
mobile users, right? The content is not changing, the metadata is not
changing. Even the structured markup that you have remains
the same. So there is kind of nothing to worry about. But if
you have taken the separate URL approach, then of course there is
something to check. Go back and check that your mobile
version is equivalent to your desktop version. On the websites
which provide the content in multiple languages, like
multilingual websites, when you're implementing the rel=hreflang elements, the
language versions should not
cross-link between desktop and mobile using the rel=hreflang tag;
rather, the desktop versions should link to the desktop versions of the different
languages, and the mobile versions should link to the mobile versions of the different
languages. As for the rel=canonical tag, if you're using
example.com, where example.com is the primary
version, the canonical version, and the m. version is the
alternate version, you would have tags where your desktop
version is pointed to as canonical and this is the alternate; you
can keep that tag as it is. There is no need to change that
tag. Okay, and now, quiz time again: here, there are two
statements here: first one reads Google recommends responsive
web design. How many of you think this is true? Okay, like
50 per cent of you. I’m assuming 50 per cent of you don’t think
this is true. How many of you think responsive web design is
preferred by Google’s ranking algorithms? If you have a
responsive web design? Surprisingly, more hands up
there. Okay, let’s see. Well, while the first statement is true, the second is not. Google
does recommend responsive web design for various reasons,
right? It does recommend responsive web design but that
does not necessarily mean that Google also prefers responsive
web design in ranking, right. What this means is, irrespective of
whatever technology you’re using, whatever approach you’re
using, either your website has taken the responsive web design
approach or the dynamic serving approach, or the separate URL approach,
we treat them all the same when it comes to ranking. Absolutely
the same. Google algorithms do not look into the background
technology when they’re ranking pages, they look at the content
of your page and the 200-plus factors we have to rank your
pages, and whether your website is mobile-friendly or not. As
long as your website is mobile-friendly, you're doing good, no
matter what technology you’re using. Okay, that brings us to
the last part of this session which is hiring an SEO. Now,
this begins with a quiz: here’s another statement. It is better
to hire a Google certified SEO or an SEO agency and to check
their certification hallmark to verify. How many of you think
this is true? Not many hands. Okay, how many think this is not
true? Okay. Some more hands. Which is kind of great but for
the people who think is true, unfortunately, this is not.
There is nothing like a Google SEO certification.
Absolutely nothing. If somebody tells you that they are a Google
certified SEO, turn your back and run away, right? They are
the absolute frauds. We do not have anything like Google SEO
certification. We may have other certifications – AdWords and
things like that – but we will never have anything like a Google SEO
certification. Hiring an SEO is a big decision, you know?
It comes with potential advantages, of course. If you
hire the right SEO. If you hire a rogue SEO, then you can
potentially have a lot of damages. You can even lose your
current ranking and money, and a lot of things. You know. So it
is a big decision. Now, how can you determine whether an SEO is right
for you, or an SEO agency is right for you, or not? There are
some questions you can ask. Things like these: first of all,
can you show me examples of your previous work? Are they
really established SEOs, or very new SEOs who are going
to disappear in a couple of months?
What happens with these SEOs is that genuine SEOs can work for
longer periods of time because they're not violating any of the
search quality guidelines and not deceptive, so they’re not
frauds, so they stay longer in the business, so, if an SEO is
staying longer in the business, and they’ve good reputation and
quality work to showcase, then there is something that you can
look forward to and move your conversation on with. Also, ask
them if they follow the Google search quality guidelines. If
they say no, then they’re not the people you should be hiring,
because if they don’t follow the Google search quality
guidelines, then, you know, the Google search quality team may
even take a manual action on your website which means they
can remove your website from Google’s index entirely, or
maybe push down your website in the results, so that can
also happen. If they’re not following Google search quality
guidelines, they’re not the people you want to work with.
The other thing you can also ask is what kind of experience do
you have beyond SEO in general marketing and stuff like that.
SEO is not something stand alone. It can’t be done stand
alone. It is part of the entire mix, you know? If somebody's
doing SEO alone, then you may want to be wary
and go into the details of what they're doing and things
like that. If it is a marketing agency that specialises in
different types of marketing – social media marketing
here and there, and all sorts of things, including SEO – then of
course you can proceed with them. And also, one of the
important questions you should also ask is what kind of results
can I expect if I hire you? If they say, we can bring you the
number-one position for this key word in two months, again, turn
your back, run away, because they are the absolute frauds.
No-one on the face of this earth can guarantee you the number-one
position, ever, okay? So, if anybody promises that thing
to you, know for sure they are frauds. What kind of experience
do you have in my industry? This is an important question as well.
This is to understand if they understand your business, your
business objective, your business goals, and your users
as well. You know, do they understand them? Well or not?
If they do not, they're not the people you want to work with.
If they understand those things well, if they have experience in
your industry, then those are the people that can potentially
help you. What kind of experience do you have in
developing international sites? This is especially relevant for
people who have multilingual sites or sites that are
targeting multiple regions. If they already have experience
there, you can probably hire them. What sort of techniques do
you use? If they say, no, the techniques we use are top
secret, then, again, the same formula: turn your back, and
run. Okay? Because when they're not transparent, there is
something wrong. There is definitely something wrong. A
genuine SEO or SEO agency can be 100 per cent transparent with you.
They can show you everything they’re doing with the website.
And how long have you been in business?
As I said earlier, good SEOs tend to stay longer in the
business because they’re not frauds so they tend to be there.
Finally, can I communicate with you? Are they really open and
transparent? Are they going to share everything they're
going to do with your website, right? If they say they can only
share some things, those are not the people you want to work
with. That's all I have for you. If you have any questions, I'm
available there, a couple of my colleagues are also available
outside this hall, and your questions are most welcome.
Thank you very much. [Applause].
We'll be right back. >>Good evening. Are you all
awake? Are you really awake! AUDIENCE: Yes!
>>Are you tired? AUDIENCE: Yes!
>>Thank you for coming despite the tiredness. How many are
from the GDG community? How many are there from the Developer
Student Clubs? Awesome. How many of you are certified … ?
Nice. Thank you very much, folks. This is a community
session, so we will spend the next 30 minutes with a
series of lightning talks. My name is Karthik Padmanabhan. I’m
representing the team and trying to moderate the 30-minute
session of talks, which will be quick and snazzy, in a four-minute format. We
will try to get through this with a lot of speed.
Before we get into the talks, I want quickly to set the context:
for every stage in a developer's life cycle, we have some
Google developer programmes available to you. If you're a
student in a college, you can check out the developer student
club. You can become a community leader, get more members into
that, and you can get going with that. Or, if you feel like you
want to be employed, and you want to get the right kind of
employment, you can look for the certification programme out there,
and some of you have checked that out. Then, you can get into
a stage where you actually are professional and you want to
work. You can become part of the GDG network –
Google developer groups, and become a leader. We will have a
series of talks on that. Eventually – especially all the womenfolk
out there, please raise your hands! Give them a big round of
applause, guys – you can all become a women tech leader, a tech
maker, as part of the programme, and for people
who spend a lot of time and build a lot of expertise and
want to share that knowledge, we have the platform called the
Google Developer Expert. We can deliver that. Finally, we have a
start-up programme, and you can become a launchpad member as
part of the programme. This is all these programmes across the
entire life cycle of a developer and you can connect with any of
these programmes or connect with me or the team to get more
details on this. With that, ladies and gentlemen, please
give a big round of applause for Christy to come on.
CHRISTY: I'm Christy Anoop. I'm a science and engineering
student who spends most of his time outside
the classroom. I help my friends and I mentor my juniors. This
is what I do. In this presentation, I will be going
through what we as leads do, our roles and responsibilities. The
developer student club programme is a massive network
of 190 Developer Student Clubs all across the country. From
approximately 100 institutes, these leads were
hand-picked by Google after rigorous rounds of interviews
from 1,000-plus applications. We strongly believe these leads
will have a high impact on society. Let me briefly go
through what we do. We are students. We learn and we keep
learning. We learn from the amazing material provided to us
from Google. Beautiful material. We share this knowledge through
workshops. These workshops are not only open for novice
developers but also for advanced developers trying to further
their skills. These workshops are custom-designed for the
audience that attends the workshops. By the end of every
workshop, we make sure to provide them material directly
from Google from which they can continue self-learning. And
trust me, we have received amazing feedback from these
workshops, and we will be providing more and more in the coming
months. Students have become technically strong; they know
how to code. They will help local businesses, organisations,
help run their businesses more efficiently, and help them make
more money at the end of the day. This is one of the beauties of
the developer club network. All of this takes place through
communication. They go to the local businesses, find out what
their problems are, and develop mobile and web
solutions for them. These are not big organisations or
businesses, just small businesses, and they come up
with simple solutions to help them. And, in this entire
journey, the developer student leads not only become
technically strong but also learn business analytical,
business communication, as well as soft skills and most
importantly, they learn real-world problem-solving
skills. How awesome is that. Students know everything. They
know the ins and outs of business and know how to
communicate well. They are more open to employability
opportunities. They are more open to freelance opportunities.
How awesome is that! That's amazing. This is one of the main
aims of the developer club network to help students make
them more employable. And finally, we would love for you
to be part of the Developer Student Club network. Please
contact the email on the screen and you will be
connected to your nearest Developer Student Club in your
city where you can provide mentorship, training, and help
students solve real-world problems. We look forward to
seeing all of you on the campuses, helping the students
and organisations, and helping our world be an amazing
place to live. Thank you.
KARTHIK: Thanks for this passionate talk. What has been
one of the important moments for you managing this programme at
CMRI – what has been a memorable moment
for you? CHRISTY: This programme has
transformed me. I’m not the person I used to be. I used to
be a selfish person, but after coming to the programme, I
changed into a new person. I won multiple awards, but when
students who you taught in your workshops come back to you
after a few weeks, after a few days, with their own
application, you feel a sense of accomplishment.
This is the type of reward that I never felt before, so this is
what I love about the club.
KARTHIK: Give him another round of applause. Let’s go on to the
next very important programme, Google Developer Certification.
Before I call on JP, how many of you have gone to the
certification lounge? Wow. That’s awesome. Thank you very
much. We have a global lead for certification. JP and Kamal.
Take it away, JP. >>Hello, everyone. We're here to
talk about the developer certification. Who wants to hear
about developer certification? Louder! Oooh! Developer
certification! Wooh! Right. I'm JP, I'm the programme
manager for developer certification, and we launched
developer certification at IO in 2016 with the purpose being to
help close a gap for employers who are looking for talented
developers, and developers who are looking for jobs. In this,
our mission has been to create performance-based certifications
that are industry standard and help developers gain recognition
and advance their careers. Since we launched the
programme, we have created associate Android developer as a
certification, Mobile Web Specialist as a certification,
and there’s more to come. We certified
approximately 1,700 developers in over 90 countries. 300 of
those developers are Indian developers. [Applause]. In
addition to the numbers, there’s also been some impact in terms
of promotions, jobs, career. A lot of them are
reporting new jobs, additional opportunities, speaking
opportunities. It is making an impact. Some of the testimonials
we’ve basically received speak to this, and there’s more. So
this is just a brief overview to set the stage for
certification. What I would like to do now is introduce Kamal
who is one of our first Android certified developers, and he's
going to speak about his experience with the
certification programme and what has happened since that time.
>>Hello, everyone. My name is Kamal. I am Google Certified
Android Developer, having experience of five-plus years
in Android. I work as a stack analyst. I first heard about
this programme at IO, and I was super excited, because I have
found a way to test my skills. So I completed my certification
back in 2016. It has benefited me personally and
professionally. I was looking for a job in 2016 in good
organisations before my certification. Once I completed
my certification, I got noticed, visibility, and got more job
calls. So it has helped in a way. Professionally, I'm a
professional engineer, a subject-matter expert, an architect, and I'm working on
different programmes that include reviews, app
management, and different activities in our organisation.
I’m getting more visibility in my current organisation due to
my certification as I’m the only certified guy in the region.
There are more benefits to certification as well. From an
organisational point of view, every organisation wants to hire
good talent, having good technical, sound knowledge under
their belt, right? So they're using certification as a filter
nowadays to conduct quick interviews. If you're a
certified developer now, you will have an edge over others.
There are other benefits in the certification that I have got.
I want to share that. If you're a social person: ever since I
completed my certification, I've been called by many
universities, colleges, you know, students, to come over to the
place and work as a trainer, to guide them, and tell them about
the certification programme, and how they can go about it,
and the right stuff. So, for example, I've been called by … recently.
So, here's the link. If you're interested in this programme,
you can go to this link, and find us, and check it out.
>>Please give JP and Kamal a round of applause. [Applause].
Please keep it going. Stay tuned, and keep
track of what is happening in the certification world. Moving
on to the next talk, are you guys having fun? Is it
interesting? It’s breaking the monotony? Yes or no? Huh!
Yes? Yes! [Cheering]. Let’s call my next speaker who is from
the favourite part of the country called Jaipur, because
he's going to share his thoughts about building the
Google developer groups. Please welcome him,
ladies and gentlemen. [Applause].
>>Hello. >>All yours.
>>Hello, everyone. [Cheering]. One of the prime benefits of
being a member is you get a huge fan following! So, I'm Vikas;
the entire country is in search of Vikas these days. I'm the
organiser of GDG Jaipur in Rajasthan. When we talk about
GDG, I want to get to know how many are aware of the term “GDG”? Right, so many
of you. GDG gives you a platform where
you can talk, you can share, you can interact, and, of course,
you can have some good food!
So, if you talk about Google developers group, in terms of
the presence all across the globe, we have 700-plus community groups in 107
countries, with 300,000 people associated with these groups,
and it’s huge. It is huge. And it’s supported by 2,000
organisers all across the globe, and, if you talk about Indian
context, so what we have here is we have 21 chapters which is
supported by developers, some of the developers are sitting in
this audience as well, and we are able to reach 75,000-plus
people. If you want to know more about
this Google developers group, or if you want to open a chapter
in your city, or in your region, you can visit
meetup.com/pro/gdg. All the activities which are run under
this Google umbrella are absolutely free, and of course,
I would like to see you in the future events which will be
organised by these chapters. If you want to know more, get more
insight about the Google developers group, you can
interact with some of the awesome GDG organisers who are
sitting in the community lounge, some of the people are sitting
here only, and you can reach out to me as well. I hope to see
you soon. Thank you. [Cheering]
>>So what he forgot to tell us is that, of that 300,000, 75,000 are from
India. Please give yourselves a big round of applause. There's a
big chunk of community out here in this great nation called
India. Thank you very much. Keep that going. Because most
important thing is that, so you’ve been serving the
community for how long now? >>Since last five years.
>>Please give another round of applause. Five years!
>>Thank you. >>Five years doing volunteer
work is a big, big deal. Thank you very much. So what is the
one thing that has come across in the last five years which you
want to share with the audience? >>We collect the feedback. We get
the inputs from the audience, the attendees,
and even if a single attendee says that the event was useful,
we get a satisfactory smile on our face, and that keeps us
motivated. >>Thank you very much, ladies
and gentlemen. Vikas for you. Please give him another round of
applause, guys, come on. Give it! Thank you. Okay, so now I
have the great honour and privilege of calling on to stage
a fourth-year computer science student. She also happens to be
the women technical lead, and she’s been doing an awesome job
making sure that a lot of women get to be part of the WTM
community out there. Let’s call and invite her on stage, guys, please. [Applause]. [Cheering].
>>Thank you, Karthik. To begin with, the WTM programme
basically provides an ecosystem which is built on visibility,
community, and resources for women tech enthusiasts around
the globe. Visibility means that it gives us a platform to
showcase role models and the achievements of women
technologists. Community is the human component which basically
helps us in building networks, sharing our knowledge, and
motivation. The Women Techmakers programme also provides
us with resources which help in career development and
building one’s technical skill set so that women can become
industry leaders. The Women Techmakers chapters across the various
cities set out to build road maps to accomplish our mission,
and our mission is to proliferate the participation of
women in technology. Back in 2012, Megan Smith started this
as part of Google I/O, and, today, it is led by a global
team of Googlers and headed by Natalie Villalobos. With over
4,350 participants and members, and 27 active chapters in India,
Women Techmakers ensures seamless functionality and
co-ordination, and the numbers keep growing by the day. Now,
let me talk a little bit about the initiatives. Our in-house
activities include meet-ups, speaker sessions, hands-on
sessions, workshops, lightning talks, and mentorship
programmes. Some of the communities also promote
progressive learning via collaborative projects to help
our members step into open-source ventures. The women
tech makers’ programme is glad to announce that this year,
there’s an overall 25 per cent
women participation in the DevFest
season 2017. When it comes to the global programmes, the tech
makers’ group encourages women to participate in signature
events like IWD, the India conference, and the women tech
makers’ lead summit.
The WTM scholars’ programme selects and supports individuals
who show excellent academic performance, outreach
contribution and technical knowledge. Women Techmakers has
also partnered with Udacity to provide
courses for women in technology. Now, I’ve spoken about the WTM
story, and I would request all the women – and the men,
everyone in the room, to engage with us, our social media and
Twitter handle is womentechmakers. You can join
the movement by signing up as a member. One of
the benefits of being a member is that a global team actually
curates these resources and the content is emailed to you. So
for the experienced women, there is a separate programme, and
they can actually register as influencers, where they mentor
start-ups, and they mentor individuals based on their
expertise. Do join the movement that is centred around
inclusion, diversity, equality, and equal
opportunity. Thank you. [Applause].
>>Ladies and gentlemen, please give her a big round of
applause. A quick question for you, what’s been one of the
biggest challenges for you as being a WTM lead?
>>Okay, thank you, Karthik. One of the biggest challenges for
me is my chapter is a college-centric chapter. So,
designing events which cater to the needs of all the four years
of college starting from the freshers to the final-year
students, it has been the biggest challenge for me because
there’s a huge variation in technical skill set over the
four years of college. Having said that, another smaller
challenge was the scheduling of events because of the college
timetable clashes and different study hours. We come from a
residential college, so I think these are the challenges.
>>Thank you. You’re doing a great job in spite of these
challenges. Thank you very much. Another thing: thank you
to you and all the WTM leads out there who are making it one
of the most diverse tech conferences.
>>Do join us! >>Thank you very much.
[Applause]. So, before I get carried away, I know I have a
lot of time pressure, so my great friend, who has been my
partner in crime in a lot of these events which we did
together for the last six months in 15 cities, over to the
Google developer expert giving a lot of his knowledge and
sharing stuff with developers. Over to you.
>>Hi, everyone. I’m a Google
Developer Expert for user experience and I’m here to talk
about the experts’ programme. Google Developer Experts is
basically a global network of experts in three focus
areas. The first focus area is technology, which includes
technologies like Android, web, cloud, and so on. The second focus area is design
which includes the disciplines of product design, design
strategy, and so on. The third focus area is growth and
monetisation which includes marketing, branding,
distribution, and so on. So, to be an expert, you have to be an
expert in some of these areas, and you have to undergo a
thorough evaluation process. Currently, there are 450
experts in the world and just 13 are from India. I believe this
is not truly representative of the amount of talent we have in
the country right now, right? So I really look forward to
having more experts from India join the programme very soon. So
what do Google experts do? The short answer to that question
is Google developer experts contribute to the community in a
very positive way. This could be in one of many ways. For
example, I have friends who are experts who write great blog
posts and their posts have lots of views and followers. Other
experts contribute open source in a very, very prolific way.
They have lots of followers and stars on GitHub. Other experts
mentor start-ups. More recently, some experts have been assisting in
live streams. That’s an interesting trend. Experts are
all about helping people and solving problems. In case you’re
interested in learning more about the experts’ programme,
please visit this URL. Feel free to reach out to me or any other
experts in the room if you have any questions. >>Thank you. What has been
your learning after helping so many start-ups with Android, UX,
building apps? What is the one thing that struck your mind?
>>So, before joining the programme, and before joining
Solve for India, I was able to contribute to my community here
in Bangalore, but after the initiative, I was able to travel to
various cities in India, and I think I’ve been able to make an
impact in the lives of designers and developers across India
which is something incredible and I’m very thankful for the
opportunity. >>Thanks very much.
[Applause]. We’re going to get to the last part of the
lightning talks. We have the last speaker. This person has
been involved in the mentorship programme for many years. He has
spent thousands of hours mentoring start-ups across the
various formats of launchpad. Let’s invite on stage Srinath.
Come on up.
>>Thanks. Good evening,
everyone. I am a product management consultant. I used to
head product management for maps and then I quit. Then I
used to write a lot about product management on LinkedIn. Someone
once saw my writing and said they were going to start a whole series of
programmes called Launchpad and would I like to come and be a
mentor there. What really is Launchpad? It is a programme
that Google put together to help start-ups with
various challenges. Realising that start-ups have different
challenges early on and later, the programme is structured
differently for early-stage start-ups and more mature ones. For
early-stage start-ups, you have Launchpad Build, which is a
two-day programme. It largely helps start-ups step back,
reflect on a lot of things around how they are building
their products. How they’re doing their marketing. And
brain-storm with a bunch of Googlers as well as industry
experts across product, UX, marketing, or technology. For later-stage,
more mature start-ups, there’s the Launchpad
Accelerator, which starts with a two-week intense sitdown where
you sit down with people and do the same exercise over and over
again and walk away with the clear targets of what you’re
going to do in the next six months. What’s in it for the
start-ups? Launchpad offers a great way for start-ups to work
with the best mentors across the world from Google and
elsewhere. Much of this is customised to the life stage
of the start-up. At the early stage, you’re asking: how do I
grow and build market share? Later on, you end up having
queries like: is this the right team structure? Do I want to move to
a different geography? Launchpad lets you engage with not just
experts but other start-ups as well. Many of us have worked
across geographies, so we are able to sit down and come back
and say some other country, some other start-up saw it another
way, so why don’t you learn from that? There’s a great opportunity for
cross-pollination. As for the impact of the programme, there have been over
500 mentorship hours, rated very highly by start-ups. In the
Launchpad Accelerator, which is a six-month programme,
start-ups have raised 62 million in funding.
It has been a great learning experience as a mentor because
we get to see very diverse start-ups and it’s a great connect-the-dots
opportunity, working with the different communities and other
mentors and seeing what different perspectives they
bring. How does a UX mentor look at it versus how does a
marketing mentor look at it? You just synthesise all of this and
help the start-ups. For me, it’s also been a great way of giving
back, helping others see clearer by being a third-party view to
reflect on the problem that they have. So, if any of you have
start-ups, I would encourage you to try out and apply at this
point. The Twitter handle is here. Thank you.
>>Thank you. Before you go, so, you’ve been spending so many
hours mentoring start-ups for maybe four years now?
>>Yes, three or four years now.
>>What is the one thing that
has struck you in all those four years of being associated at
launchpad and mentoring start-ups?
>>I’ve done mentoring sessions in Istanbul, with start-ups from
across multiple countries. I remember at this one time, we
were sitting with a start-up that was working on house help in
Mexico, so they were trying to build a community of house help
and a marketplace for them. We were sitting and brainstorming
about the whole thing, and they were looking at
how to make sure they get the first visit from house
help. We started brainstorming around this and realised that the
emotional value of having someone trusted at home is
higher than anything else. That brought about an entire
shift in the way they looked at the problem.
Helping them reflect on the problem and see it differently
has been a big thing. >>Thanks for all your
significant contribution. Keep it going. Thank you very much.
With that, before I thank you all, I want to quickly wrap up
the session. So we heard six lightning talks. There are
six flagship programmes from Google. What’s the first one? Developer
Student Club. If you’re in college, a student, just
participate in the Developer Student Club, enrol, meet your
leader and start contributing to the community right away. If
you’re in college or working and want to get certified, like Android
to start with, PWA at some point in time, or Google Cloud,
please use Kamal as an example and get certified. If you’re a
professional developer already engaged in working with a
start-up or enterprise, please join the Google developer
groups, a vibrant community which always believes in sharing
and gaining a lot from that by sharing, become a member right
away. Then for all the gorgeous womenfolk out there, you get an
opportunity to become a WTM lead and start your own stuff within the
GDG chapter, getting more women to participate in
technology. And then, the Google Developer Experts programme, which was
talked about: if you want to share expertise, that’s a great
platform, and finally, we had Launchpad: if you’re good at
certain skills and want to mentor and give back to the
community, that’s the Launchpad programme. Those are the six
flagship programmes from the developer group. Me and the rest
of the team are there if you guys still have the energy and
want to keep us engaged, happy to share more information, more
knowledge, and see how we can get this going. With that, thank
you very much. [Applause]. So now I’m
shifting gears here. Now, I’m going to speak on behalf
of the GDD core team and the boss of the GDD core team, who is
sitting right here, to thank you all for being such
awesome attendees and audience, and really making it very, very
great. [Applause] It’s very encouraging and exciting for all
of us out here from the GDD India core team to bring this
event to you. I want to thank you all for being an awesome
audience. Please give yourselves a big round of applause, guys.
Thank you. So a couple of small things before we actually close.
Please check out your inboxes. You will have got an email in
your inbox. We value your feedback a lot. We want you to
give your feedback so that we can keep improving and make
things better. So, if you like something, please let us know.
If you don’t like something, please let us know. Either way,
please just let us know, so we really, really want you to give
us the feedback so we can make things much better and
enhance your experience. Share with the community, do some
social media updates, share the live streams. There are a lot of
videos out there on the YouTube channel. Let the content reach
other folks. If you like it, please like it and share it with
others. If you’re within the community, or
you want to join the community and run GDD Extended events,
please check out the GDD India website. There are a lot of ways
for you to access and do your own GDD events within your
community, so we want it to spread, and this is available
until February, so a lot of stuff available for you to host
the events. And then finally, my favourite topic, something we as the
core team have been working on is to make sure this conference
is diverse. I want you to give yourselves a big round of
applause, for the women and men making sure it is a diverse
event. Thank you very much for that. [Applause]. Then we had a
lot of people attend GDD India, not only from Bangalore. It has
been a global event, which is a great surprise. We have a lot
of people from outside Bangalore, across India. We have a good
representation, and a lot of people from outside India. There
are many countries, I don’t know the exact numbers, we will
announce them at some point in time. Please give them another
round of applause. It is great to have you again, and again,
and again. Thank you very much. With that, let’s enjoy a brief
video of the exciting times we had together over the last
few days. Let’s roll the video, please.
>>This is the first GDD ever in India, and this is the largest
developer event in India ever. >>Living, breathing, eating,
sleeping and writing code. It is the first time the event is
happening in Bangalore. >>Many people are here, and that’s a different kind of feeling.
>>Our purpose being here is to know what is next. As the
future, it is going to change things. We want to make our
developers embrace those technologies. >>Programming is
something like math. You need to practise a lot. >>The session
in the morning was about design. It was new to me, and that’s how
I can improve my work.
[Cheering and Applause]. Okay.
For one last time, on behalf of the GDD India core team, thank
you all very much, and see you all next year!