New Course: Android Design for Developers

Posted by Nick Butcher, pixel pusher

What makes an app intuitive and easy to use? What makes it hard or frustrating? How can your app stand out in a competitive market? Learn the fundamentals of good Android design and the patterns that have proven to work on Android to help you build better apps.

This 5-lesson series, available on Udacity, begins with a crash course on the fundamentals of Android UI design. It helps you sort your DIPs from your pixels, pick the right layouts and navigation structures, and style your app to match your brand. The rest of the course is a deep dive into the principles and implementation of material design, showing you how to build beautiful, consistent experiences that are right at home on Android.

Lesson 2 dives into the concept of tangible surfaces, and how they establish hierarchy to make your UI more understandable. Lesson 3 looks at applying bold graphic design, or how the principles of space, color, typography and imagery help you to create a beautiful, branded experience. Lesson 4 studies the use of meaningful motion to bring your apps to life and create a seamless and more intuitive experience. Finally, lesson 5 shows how adaptive design makes your app shine on any screen size.

This course is aimed at developers familiar with Android who want to boost their design skills, or designers who want to understand more about the platform they’re creating for. The full course is available for free, or you can enroll in Udacity’s Android Nanodegree for extra help and support. So sign up for the Android Design for Developers course and go build something brilliant!


Chrome custom tabs smooth the transition between apps and the web

Originally posted on the Chromium blog

Posted by Yusuf Ozuysal, Chief Tab Customizer

Android app developers face a difficult tradeoff when it comes to showing web content in their Android app. Opening links in the browser is familiar for users and easy to implement, but results in a heavy-weight transition between the app and the web. You can get more granular control by building a custom browsing experience on top of Android’s WebView, but at the cost of more technical complexity and an unfamiliar browsing experience for users. A new feature in the most recent version of Chrome called custom tabs addresses this tradeoff by allowing an app to customize how Chrome looks and feels, making the transition from app to web content fast and seamless.

Chrome custom tabs with pre-loading vs. Chrome and WebView

Chrome custom tabs allow an app to provide a fast, integrated, and familiar web experience for users. Custom tabs are optimized to load faster than WebViews and traditional methods of launching Chrome. As shown above, apps can pre-load pages in the background so they appear to load nearly instantly when the user navigates to them. Apps can also customize the look and feel of Chrome to match their app by changing the toolbar color, adjusting the transition animations, and even adding custom actions to the toolbar so users can perform app-specific actions directly from the custom tab.

Custom tabs benefit from Chrome’s advanced security features, including its multi-process architecture and robust permissions model. They use the same cookie jar as Chrome, allowing a familiar browsing experience while keeping users’ information safe. For example, if a user has signed in to a website in Chrome, they will also be signed in if they visit the same site in a custom tab. Other features that help users browse the web, like saved passwords, autofill, Tap to Search, and Sync, are also available in custom tabs.

Custom tabs are easy for developers to integrate into their app by tweaking a few parameters of their existing VIEW intents. Basic integrations require only a few extra lines of code, and a support library makes more complex integrations easy to accomplish, too. Since custom tabs are a feature of Chrome, they’re available on any version of Android where recent versions of Chrome are available.
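For example, a basic integration using the customtabs support library might look like the following sketch; the toolbar color and URL are placeholder values:

import android.app.Activity;
import android.net.Uri;
import android.support.customtabs.CustomTabsIntent;

public class CustomTabLauncher {
    // Opens the URL in a Chrome custom tab; on devices without a recent
    // Chrome, the VIEW intent falls back to the default browser.
    public static void openUrl(Activity activity, String url) {
        CustomTabsIntent customTabsIntent = new CustomTabsIntent.Builder()
                .setToolbarColor(0xFF2196F3) // example brand color
                .setShowTitle(true)          // show the page title in the toolbar
                .build();
        customTabsIntent.launchUrl(activity, Uri.parse(url));
    }
}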

Users will begin to experience custom tabs in the coming weeks in Feedly, The Guardian, Medium, Player.fm, Skyscanner, Stack Overflow, Tumblr, and Twitter, with more coming soon. To get started integrating custom tabs into your own application, check out the developer guide.


Game developer Dots shares its Do’s and Don’ts for improving your visibility on Google Play

Posted by Lily Sheringham, Developer Marketing at Google Play

Editor’s note: A few weeks ago we shared some tips from game developer, Seriously, on how they’ve been using notifications successfully to drive ongoing engagement. This week, we’re sharing tips from Christian Calderon at US game developer, Dots, on how to successfully optimize your Play Store Listing. -Ed.

A well thought-out Google Play store listing can significantly improve the discoverability of your app or game and drive installations. With the recent launch of Store Listing Experiments on the Google Play Developer Console, you can now conduct A/B tests on the text and graphics of your store listing page and use the data to make more informed decisions.

Dots is a US game developer that released the popular game Dots and its addictive sequel, TwoDots. Dots used its store listings to showcase its brands and improve conversions by letting players know what to expect.

Christian Calderon, Head of Marketing for Dots, shared his top tips with us on store listings and visibility on Google Play.

Do’s and Don’ts for optimizing store listings on Google Play

Do’s

  • Do be creative and unique with the icon. Try to visually convince the user that your product is interesting and in alignment with what they are looking for.
  • Do remember to quickly respond to reviews and implement a scalable strategy to incorporate feedback into your product offering. App ratings are important social proof that your product is well liked.
  • Do invest in a strong overall paid and organic acquisition strategy. More downloads will make your product seem more credible to users, increasing the likelihood that a user will install your app.
  • Do link your Google Play store listing to your website, social media accounts, press releases and any of your consumer-facing channels that may drive organic visibility to your target market. This can impact your search positioning.
  • Do use video visualization to narrate the core value proposition. For TwoDots, our highest converting videos consist of gameplay, showcasing features and events within the game that let the player know exactly what to expect.

Don’ts

  • Don’t spam keywords in your app title. Keep the title short, original and thoughtful, and keep your brand in mind when representing your product offering.
  • Don’t overload the ‘short description’. Keep it concise. It should be used as a call-to-action to address your product’s core value proposition and invite the user to install the application. Remember to consider SEO best practices.
  • Don’t overuse text in your screenshots. They should create a visual narrative for what’s in your game and help users visualize your product offering, using localization where possible.
  • Don’t have a negative, too short or confusing message in your “What’s New” copy. Let users know what updates, product changes or bug fixes have been implemented in new versions. Keep your copy buoyant, informative, concise and clear.
  • Don’t flood the user with information in the page description. Keep the body of the page description organized and concise, and test different structural patterns to find what works best for you and your product!

Use Google Play Store Listing Experiments to increase your installs

As part of the 100 Days of Google Dev video series, Kobi Glick from the Google Play team explains how to test different graphics and text on your app or game’s Play Store listing to increase conversions using the new Store Listing Experiments feature in the Developer Console.

Find out more about using Store Listing Experiments to turn more of your visits into installs.

Announcing the Android Auto Desktop Head Unit

Posted by Josh Gordon, Developer Advocate

Today we’re releasing the Desktop Head Unit (DHU), a new testing tool for Android Auto developers. The DHU enables your workstation to act as an Android Auto head unit that emulates the in-car experience for testing purposes. Once you’ve installed the DHU, you can test your Android Auto apps by connecting your phone and workstation via USB. Your phone will behave as if it’s connected to a car, and your app is displayed on the workstation just as it would be displayed in a car.


The DHU runs on your workstation. Your phone runs the Android Auto companion app.


Now you can test pre-released versions of your app in a production-like environment, without having to work from your car. With the release of the DHU, the previous simulators are deprecated, but will be supported for a short period prior to being officially removed.

Getting started

You’ll need an Android phone running Lollipop or higher, with the Android Auto companion app installed. Compile your Auto app and install it on your phone.

Install the DHU

Install the DHU on your workstation by opening the SDK Manager and downloading it from Extras > Android Auto Desktop Head Unit emulator. The DHU will be installed in the <sdk>/extras/google/auto/ directory.

Running the DHU

Be sure your phone and workstation are connected via USB.

  1. Enable Android Auto developer mode by starting the Android Auto companion app and tapping on the header image 10 times. This is a one-time step.
  2. Start the head unit server in the companion app by clicking on the context menu and selecting “Start head unit server”. This option only appears after developer mode is enabled. A notification appears to show the server is running.
  3. On your workstation, set up port forwarding using ADB so the DHU can connect to the head unit server running on your phone. Open a terminal and type adb forward tcp:5277 tcp:5277. Don’t forget this step!
  4. Start the DHU:

      cd <sdk>/extras/google/auto/

      On Linux or OS X: ./desktop-head-unit

      On Windows: desktop-head-unit.exe

At this point the DHU will launch on your workstation, and your phone will enter Android Auto mode. Check out the developer guide for more info. We hope you enjoy using the DHU!

Building better apps with Runtime Permissions

Posted by Ian Lake, Developer Advocate

Android devices do a lot, whether it is taking pictures, getting directions or making phone calls. With all of this functionality comes a large amount of very sensitive user data including contacts, calendar appointments, current location, and more. This sensitive information is protected by permissions, which each app must have before being able to access the data. Android 6.0 Marshmallow introduces one of the largest changes to the permissions model with the addition of runtime permissions, a new permission model that replaces the existing install time permissions model when you target API 23 and the app is running on an Android 6.0+ device.

Runtime permissions give your app the ability to control when and with what context you’ll ask for permissions. This means that users installing your app from Google Play will not be required to accept a list of permissions before installing your app, making it easy for users to get directly into your app. It also means that if your app adds new permissions, app updates will not be blocked until the user accepts the new permissions. Instead, your app can ask for the newly added runtime permissions as needed.

Finding the right time to ask for runtime permissions has an important impact on your app’s user experience. We’ve gathered a number of design patterns in our new Permission design guidelines including best practices around when to request permissions, how to explain why permissions are needed, and how to handle permissions being denied.

Ask up front for permissions that are obvious

In many cases, you can avoid permissions altogether by using the existing intents system to utilize other existing specialized apps rather than building a full experience within your app. An example of this is using ACTION_IMAGE_CAPTURE to start an existing camera app the user is familiar with rather than building your own camera experience. Learn more about permissions versus intents.

However, if you do need a runtime permission, there are a number of tools to help you. You can check whether your app has a permission with ContextCompat.checkSelfPermission() (available as part of revision 23 of the support-v4 library for backward compatibility) and request permissions with requestPermissions(), which brings up the system-controlled permissions dialog so the user can grant the requested permission(s) if you don’t already have them. Keep in mind that users can revoke permissions at any time through the system settings, so you should check permissions every time.

A special note should be made about shouldShowRequestPermissionRationale(). This method returns true if the user has denied your permission request at least once but has not selected the ‘Don’t ask again’ option (which appears the second or later time the permission dialog is shown). This gives you an opportunity to provide additional education about the feature and why you need the given permission. Learn more about explaining why the app needs permissions.
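Putting these pieces together, here’s a minimal sketch of the flow inside an Activity that reads contacts; the request code and the loadContacts() helper are hypothetical names for this example:

private static final int REQUEST_CONTACTS = 42; // arbitrary request code

private void showContacts() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_CONTACTS)
            != PackageManager.PERMISSION_GRANTED) {
        if (ActivityCompat.shouldShowRequestPermissionRationale(this,
                Manifest.permission.READ_CONTACTS)) {
            // The user denied the request before: explain why the feature
            // needs this permission before asking again.
        }
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.READ_CONTACTS}, REQUEST_CONTACTS);
    } else {
        loadContacts(); // permission already granted
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions,
        int[] grantResults) {
    if (requestCode == REQUEST_CONTACTS && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        loadContacts();
    } // otherwise, gracefully degrade the feature
}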

Read through the design guidelines and our developer guide for all of the details on getting your app ready for Android 6.0 and runtime permissions. Making it easy to install your app and providing context around accessing users’ sensitive data are key changes you can make to build better apps.

Real-Time Data Validation with Google Tag Assistant Recordings

We’ve said it before and we’ll say it again: great analytics can only happen with great data.  
That’s why we’ve made it a priority to help our users confirm that their data is top-quality. Last year we released our automated data diagnostics feature, and now we’re proud to announce the launch of another powerful new feature: Google Tag Assistant Recordings.  
This tool helps you instantly validate your Google Analytics or Google Analytics Premium implementation. If it finds data quality issues, it helps you troubleshoot them and then recheck them on the spot.  It’s available as part of the Google Tag Assistant Chrome Extension.

“Tag Assistant Recordings is fast becoming one of my favorite tools for debugging Google Analytics Premium installations! I use it multiple times a day with my Premium clients to help explain odd trends in their data or debug configuration issues. Already I’m building it into my core workflow.”

- Dan Rowe, Director of Analytics at Analytics Pros

What can I use it for?
Tag Assistant Recordings works with all kinds of data events: purchases, logins, and so on. What if you sell flowers online and want to confirm that Enhanced Ecommerce is capturing the checkout flow correctly? With Tag Assistant Recordings, you can record yourself going through the checkout process as you buy a dozen red roses, and then review what Google Analytics captured.

If you find that your account isn’t set up properly — if the sale wasn’t recorded or was mis-labeled — you can make adjustments and test it all over again instantly.  With Tag Assistant Recordings, you know you’re capturing all the data that’s important to you.
Tag Assistant Recordings can be particularly useful when (1) you’re in the process of implementing Google Analytics or Google Analytics Premium, (2) you’ve recently made updates to your site, or (3) you’re making changes to your Google Analytics or Google Analytics Premium configuration. It works even if your new site or your updates aren’t visible to the public yet, so you can feel confident before you go live.

Tag Assistant Recordings can also help if you want to reconfigure your Google Analytics account to better reflect your business. For example, you may want to configure multi-channel funnels to detect your AdWords channel. Tag Assistant Recordings lets you set up this new functionality in Google Analytics and test immediately whether everything is working as you expect.

“Tag Assistant Recordings has already been a HUGE help! Analytics Pros and About.com were working on an issue with sessions double-counting and Tag Assistant Recordings let us narrow down precisely which hits were having new sessions counted. It saved us hours of time and helped us jump right to where the problem was. So, in summary, this is awesome!”

- Greg McDonald, Business Intelligence Analyst at About.com

How does it work?

Tag Assistant Recordings works through the Google Tag Assistant Chrome Extension, so you’ll need to download the extension if you aren’t already using it. From there, setup is easy. Simply open Google Tag Assistant, record the user flow you’d like to check, and then view the full report in Tag Assistant. You’ll want to view both tabs in the report (Tag Assistant and Google Analytics) to verify that you see the intended tags. Keep in mind that the Google Analytics data is only available if you have access to the appropriate property or view.

Here’s a nifty bonus: If you find a problem, and you think you have fixed it by changing settings from within Google Analytics, return to the Google Analytics tab in Tag Assistant Recordings and click the “Update” button. You’ll see instantly how your configuration changes would have affected this recording.

We hope that Google Tag Assistant will be a valuable new tool in your analytics toolkit. Why not start using it today?


Posted by:  Ajay Nainani, Frank Kieviet, and Jocelyn Whittenburg, Google Analytics team

Get the Do’s and Don’ts for Notifications from Game Developer Seriously

Posted by Lily Sheringham, Developer Marketing at Google Play

Editor’s note: We’ve been talking to developers to find out how they’ve been achieving success on Google Play. We recently spoke to Reko Ukko at Finnish mobile game developer, Seriously, to find out how to successfully use Notifications.

Notifications on Android let you send timely, relevant, and actionable information to your users’ devices. When used correctly, notifications can increase the value of your app or game and drive ongoing engagement.

Seriously is a Finnish mobile game developer focused on creating entertaining games with quality user experiences. They use push notifications to drive engagement with their players, such as helping players progress to the next level when they’ve left the app after getting stuck.

Reko Ukko, VP of Game Design at Seriously, shared his tips with us on how to use notifications to increase the value of your game and drive ongoing engagement.

Do’s and don’ts for successful game notifications

Do’s

  • Do let the user get familiar with your service and its benefits before asking for permission to send notifications.
  • Do include actionable context. If it looks like a player is stuck on a level, send them a tip to encourage action.
  • Do consider re-activation. If the player thoroughly completes a game loop and could be interested in playing again, think about using a notification. Look at timing this shortly after the player exits the game.
  • Do deep link from the notification to where the user expects to go based on the message. For example, if the notification is about “do action X in the game now to win”, link to where that action can take place.
  • Do try to make an emotional connection with the player by reflecting the style, characters, and atmosphere of your game in the notification. If the player is emotionally connected to your game, they’ll appreciate your notifications and be more likely to engage.

Don’ts

  • Don’t treat your users as if they’re all the same – identify and group them so you can push notifications that are relevant to their actions within your app.
  • Don’t spam push notifications or interrupt game play. Get an understanding of the right frequency for your audience to fit the game.
  • Don’t just target players at all hours of the day. Choose moments when players typically play games – early morning commutes, lunch breaks, the end of the work day, and in the evening before sleeping. Take time zones into account.
  • Don’t forget to expire the notifications if they’re time-limited or associated with an event. You can also recycle the same notification ID to avoid stacking notifications for the user.
  • Don’t leave notifications up to guesswork. Experiment with A/B testing and iterate to compare how different notifications affect engagement and user behavior in your app. Go beyond measuring app opening metrics – identify and respond to user behavior.

Experiment with notifications yourself to understand what’s best for your players and your game. You can power your own notifications with Google Cloud Messaging, which is free, cross-platform, reliable, and thoughtful about battery usage. Find out more about developing Notifications on Android.

Affiliate Attribution: Putting the Pieces Together

Originally Posted on the Adometry M2R Blog

Recently I was reminded of an article from a little while back, titled, “2013: The Year of Affiliate Attribution?” It’s an interesting take and worthwhile read for those interested in affiliate marketing and the associated measurement challenges. Given that some time has passed, I thought it would be interesting to take a look at progress to date towards realizing a more holistic and accurate view of affiliate performance as part of a comprehensive cross-channel strategy.

Most affiliate managers have a similar goal: to manage affiliates holistically, meaning investing in those that predominantly drive net-new customers independent of other paid marketing investments. Ultimately, this model allows them to optimize CPA by managing commissions, coupon discounts, and brand appropriateness based on the true “incremental value” provided to the business. Unfortunately, due to a lack of transparency and inadequate measurement, many marketers find themselves short of this goal. The result is the ongoing, nagging question, “Is my affiliate strategy working, and am I overpaying for what I’m getting?”

Why ‘Affiliate Attribution’ Is Hard

Affiliate marketers’ challenges range from competing against affiliates in PPC ad programs to concerns about questionable business practices employed by some “opportunistic” affiliates offering marginal value, but still receiving credit for sales that likely would have happened regardless. Which brings us to the central question:

How do marketers determine how much credit an affiliate should receive?

As you may know, opinions about how much conversion credit affiliates deserve for any given transaction vary widely. While there are a number of factors that influence affiliate performance (e.g. where they appear in the sales funnel, industry/sector, time-to-purchase length, etc.) for most brands the attribution model that is utilized will have a significant impact on which affiliates are over- and under-valued.

For example, in a last-click world affiliates that enter the purchase path towards the bottom of the funnel often hold their own; yet, when brands begin measuring on a full-funnel basis incorporating impression data, many struggle to prove their incremental value as the consumer has many exposures to marketing long before they reach the affiliate site. Conversely, affiliates that act predominantly as top- or mid-funnel (content, loyalty, etc.) are usually undervalued using last-click but can garner more credit using a full-funnel, data-driven attribution methodology. I should also mention these are broad generalizations only meant as examples, and it’s not necessarily a zero-sum game.
Another challenge is that fractional, data-driven attribution is difficult to implement for some types of promotions. One instance of this is cash back, loyalty and reward sites that must know the exact commission amount they will receive for each transaction so that they can pass on discounts to members. Given the complexity of more sophisticated attribution models, this data isn’t readily available.

Lastly, there are several organizational challenges that inhibit the use of data-driven attribution among affiliate marketers. Some industry experts have indicated that many publishers, as much as 70-80%, strip impression tracking code from affiliate URLs. Another measurement challenge we see frequently is brands managing affiliates at the channel level, leaving little sub-channel categorization, which is where significant optimization opportunities exist.

Affiliate Attribution and the Performance Marketing Goldmine

Of course, part of our work at Adometry is helping customers address these challenges (and more) to ensure they are measuring affiliate contributions accurately and are able to take appropriate action based on fully-attributed results.

Some key advantages of using data-driven attribution to measure affiliate sales include:

  • The ability to create a unified framework to compare performance (clicks and impressions) in which affiliates compete for budgets on equal footing,
  • Increased visibility into which publishers are truly driving net-new customers, by specifying which are an integral part of a multi-touch path and which are expendable,
  • The knowledge required to implement a publisher category taxonomy, allowing more insight into how different types of publishers perform by funnel stage and where to improve efficiency,
  • Insight into the true incremental value publishers are providing, and the ability to offer commission rates that reflect this actual value,
  • A better understanding of affiliates’ role in the overall mix, further informing marketers’ use of complementary tactics to maximize affiliate contributions in concert with other channels,
  • The ability to use actual performance data to counter myths and frustrations with affiliates (cookie stuffing, stealing conversions, etc.)

Taken separately, each of these represents a significant opportunity to be more effective in how you identify and utilize affiliate attribution to drive new opportunities. Together, they represent a fundamental improvement in how you manage your overall marketing spending, strategic planning and optimization efforts.

Top-performing affiliates, particularly those at the top and middle of the funnel, also stand to benefit from a more transparent, accurate and fair system for crediting conversions. In fact, several large-scale, forward-thinking affiliates are already investing in data-driven attribution to arm themselves with the data required to effectively compete and win business as brands become more sophisticated and judicious with their affiliate budgets.

It’s an exciting time for performance marketing. Change is always hard, but in this case it’s absolutely change for the better. And frankly, it’s time. What are your thoughts and experiences with measuring affiliate performance and attribution?

Posted by Casey Carey, Google Analytics team

Hungry for some Big Android BBQ?

Posted by Colt McAnlis, Head Performance Wrangler

The Big Android BBQ (BABBQ) is almost here and Google Developers will be there serving up a healthy portion of best practices for Android development and performance! BABBQ will be held at the Hurst Convention Center in Dallas/Fort Worth, Texas on October 22-23, 2015.

We also have some great news! If you sign up for the event through August 25th, you will get 25% off when you use the promotional code “ANDROIDDEV25”. You can also click here to use the discount.

Now, sit back, and enjoy this video of some Android cowfolk preparing for this year’s BBQ!

The Big Android BBQ is an Android combo meal with a healthy serving of everything ranging from the basics, to advanced technical dives, and best practices for developers smothered in a sweet sauce of a close knit community.

This year, we are packing in an unhealthy amount of Android Performance Patterns, followed up with the latest and greatest techniques and APIs from the Android 6.0 Marshmallow release. It’s all rounded out with code labs to let you get hands-on learning. To super-size your meal, Android Developer instructors from Udacity will be on-site to guide users through the Android Nanodegree. (Kinda like a personal waiter at an all-you-can-learn buffet.)

Also, come watch Colt McAnlis defend his BABBQ “Speechless” Crown against Silicon Valley reigning champ Chet Haase. It’ll be a fist fight of humor in the heart of Texas!

You can get your tickets here, and we look forward to seeing you in October!

Interactive watch faces with the latest Android Wear update

Posted by Wayne Piekarski, Developer Advocate

The Android Wear team is rolling out a new update that includes support for interactive watch faces. Now, you can detect taps on the watch face to provide information quickly, without having to open an app. This gives you new opportunities to make your watch face more engaging and interesting. For example, in this animation for the Pujie Black watch face, you can see that just touching the calendar indicator quickly changes the watch face to show the agenda for the day, making the watch face more helpful and engaging.

Interactive watch face API

The first step in building an interactive watch face is to update your build.gradle to use version 1.3.0 of the Wearable Support library. Then, you enable interactive watch faces in your watch face style using setAcceptsTapEvents(true):

setWatchFaceStyle(new WatchFaceStyle.Builder(mService)
    .setAcceptsTapEvents(true)
    // other style customizations
    .build());

To receive taps, you can override the following method:

@Override
public void onTapCommand(int tapType, int x, int y, long eventTime) { }

You will receive events TAP_TYPE_TOUCH when the user initially taps on the screen, TAP_TYPE_TAP when the user releases their finger, and TAP_TYPE_TOUCH_CANCEL if the user moves their finger while touching the screen. The events will contain (x,y) coordinates of where the touch event occurred. You should note that other interactions such as swipes and long presses are reserved for use by the Android Wear system user interface.
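For example, here’s a minimal sketch of a handler inside a CanvasWatchFaceService.Engine; the mCalendarBounds region and mShowAgenda flag are hypothetical fields of your own engine:

@Override
public void onTapCommand(int tapType, int x, int y, long eventTime) {
    switch (tapType) {
        case TAP_TYPE_TAP:
            // The user lifted their finger: treat it as a tap if it landed
            // inside our (hypothetical) calendar indicator.
            if (mCalendarBounds.contains(x, y)) {
                mShowAgenda = !mShowAgenda; // toggle the agenda view
                invalidate();               // redraw the watch face
            }
            break;
        case TAP_TYPE_TOUCH:
        case TAP_TYPE_TOUCH_CANCEL:
        default:
            // Initial touch or cancelled gesture: nothing to do here.
            break;
    }
}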

And that’s it! Adding interaction to your existing watch faces is really easy with just a few extra lines of code. We have updated the WatchFace sample to show a complete implementation, and design and development documentation describing the API in detail.

Wi-Fi added to LG G Watch R

This release also brings Wi-Fi support to the LG G Watch R. Wi-Fi support is already available in many Android Wear watches and allows the watch to communicate with the companion phone without requiring a direct Bluetooth connection. So, you can leave your phone at home, and as long as you have Wi-Fi, you can use your watch to receive notifications, send messages, make notes, or ask Google a question. As a developer, you should ensure that you use the Data API to abstract away your communications, so that your application will work on any kind of Android Wear watch, even those without Wi-Fi.
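As a sketch, syncing a small payload with the Data API might look like this; the /agenda path and keys are made up for the example, and mGoogleApiClient is assumed to be a connected GoogleApiClient with the Wearable API enabled:

PutDataMapRequest dataMapRequest = PutDataMapRequest.create("/agenda");
dataMapRequest.getDataMap().putString("next_event", "Lunch at noon");
dataMapRequest.getDataMap().putLong("timestamp", System.currentTimeMillis());
// The platform delivers the data item over Bluetooth or Wi-Fi,
// whichever transport is currently available.
Wearable.DataApi.putDataItem(mGoogleApiClient, dataMapRequest.asPutDataRequest());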

Updates to existing watches

This update to Android Wear will roll out via an over-the-air (OTA) update to all Android Wear watches over the coming weeks. The wearable support library version 1.3 provides the implementation for touch interactions, and is designed to continue working on devices which have not been updated. However, the touch support will only work on updated devices, so you should wait to update your apps on Google Play until the OTA rollout is complete, which we’ll announce on the Android Wear Developers Google+ community. If you want to release immediately but check if touch interactions are available, you can use this code snippet:

try {
  PackageInfo packageInfo = getPackageManager()
      .getPackageInfo("com.google.android.wearable.app", 0);
  if (packageInfo.versionCode > 720000000) {
    // Supports taps - cache this result to avoid calling PackageManager again
  } else {
    // Device does not support taps yet
  }
} catch (PackageManager.NameNotFoundException e) {
  // The Android Wear app is not installed on this device
}

Android Wear developers have created thousands of amazing apps for the platform and we can’t wait to see the interactive watch faces you build. If you’re looking for a little inspiration, or just a cool new watch face, check out the Interactive Watch Faces collection on Google Play.

Google Analytics User Conference: G’day Australia

The Australian Google Analytics User Conference is worth clearing your diaries for, with some of the most well-known and respected international industry influencers making their way to Sydney and Melbourne to present at the conference this September.

Hosted by Google Certified Partner Loves Data, the conference will teach you about the latest features, what’s trending and popular, and best practices, and uncover ways to get the most out of Google Analytics. Topics covered include: making sure digital analytics is indispensable to your organisation; applying analytics frameworks to your whole organisation; improving your data quality and collection; data insights you can action; and presenting data to get results.

Presenting the keynote is Jim Sterne, Chairman of the Digital Analytics Association, founder of eMetrics and also known as the godfather of analytics. Joining him are two speakers from Google in the US: Krista Seiden, Google Product Manager and Analytics Advocate and Mike Kwong, Senior Staff Software Engineer.

Other leading international industry influencers presenting at the conference include Simo Ahava (Google Developer Expert; Reaktor), Chris Chapo (Enjoy), Benjamin Mangold (Loves Data), Lea Pica (Consultant, Leapica.com), Chris Samila (Optimizely), Carey Wilkins (Evolytics) and Tim Wilson (Web Analytics Demystified).  

Expect to network with other like-minded data enthusiasts, marketers, developers and strategists, plus get to know the speakers better during the Conference’s Ask Me Anything session. We’ve even covered our bases for those seeking next-level expertise with a marketing or technical masterclass available the day before the conference. Find out more information about the speakers and check out the full program.

Last year’s conference sold out way in advance and this year’s conference is heading in the same direction. Book your tickets now to avoid disappointment. 

Event details Sydney
Masterclass & Conference | 8 & 9 September 2015

Event details Melbourne
Masterclass & Conference | 10 & 11 September 2015
Posted by Will Pryor, Google Analytics team

Develop a sweet spot for Marshmallow: Official Android 6.0 SDK & Final M Preview

By Jamal Eason, Product Manager, Android

Android 6.0 Marshmallow

Whether you like them straight out of the bag, roasted to a golden brown exterior with a molten center, or in fluff form, who doesn’t like marshmallows? We definitely like them! Since the launch of the M Developer Preview at Google I/O in May, we’ve enjoyed all of your participation and feedback. Today with the final Developer Preview update, we’re introducing the official Android 6.0 SDK and opening Google Play for publishing your apps that target the new API level 23 in Android Marshmallow.

Get your apps ready for Android Marshmallow

The final Android 6.0 SDK is now available to download via the SDK Manager in Android Studio. With the Android 6.0 SDK you have access to the final Android APIs and the latest build tools so that you can target API 23. Once you have downloaded the Android 6.0 SDK into Android Studio, update your app project’s compileSdkVersion to 23 and you are ready to test your app with the new platform. You can also update your app’s targetSdkVersion to 23 to test out API 23-specific features like auto-backup and app permissions.
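For reference, the relevant build.gradle entries might look like this sketch (the build tools and minSdkVersion values are examples, not requirements):

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.0"

    defaultConfig {
        minSdkVersion 15        // example minimum
        targetSdkVersion 23     // opt in to runtime permissions and auto-backup
    }
}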

Along with the Android 6.0 SDK, we also updated the Android Support Library to v23. The new Android Support library makes it easier to integrate many of the new platform APIs, such as permissions and fingerprint support, in a backwards-compatible manner. This release contains a number of new support libraries including: customtabs, percent, recommendation, preference-v7, preference-v14, and preference-leanback-v17.

Check your App Permissions

Along with the new platform features like fingerprint support and Doze power saving mode, Android Marshmallow features a new permissions model that streamlines the app install and update process. To give users this flexibility and to make sure your app behaves as expected when an Android Marshmallow user disables a specific permission, it’s important that you update your app to target API 23, and test the app thoroughly with Android Marshmallow users.

How to Get the Update

The Android emulator system images and developer preview system images have been updated for supported Nexus devices (Nexus 5, Nexus 6, Nexus 9 & Nexus Player) to help with your testing. You can download the device system images from the developer preview site. Also, similar to the previous developer update, supported Nexus devices will receive an Over-the-Air (OTA) update over the next couple of days.

Although the Android 6.0 SDK is final, the device system images are still developer preview versions. The preview images are near final but they are not intended for consumer use. Remember that when Android 6.0 Marshmallow launches to the public later this fall, you’ll need to manually re-flash your device to a factory image to continue to receive consumer OTA updates for your Nexus device.

What is New

Compared to the previous developer preview update, you will find this final API update fairly incremental. You can check out all the API differences here, but a few of the changes since the last developer update include:

  • Android Platform Change:
    • Final Permissions User Interface — we updated the permissions user interface and enhanced some of the permissions behavior.
  • API Change:
    • Updates to the Fingerprint API — which enables better error reporting, better fingerprint enrollment experience, plus enumeration support for greater reliability.

Upload your Android Marshmallow apps to Google Play

Google Play is now ready to accept your API 23 apps via the Google Play Developer Console on all release channels (Alpha, Beta & Production). At the consumer launch this fall, the Google Play store will also be updated so that the app install and update process supports the new permissions model for apps using API 23.

To make sure that your updated app runs well on Android Marshmallow and older versions, we recommend that you use Google Play’s newly improved beta testing feature to get early feedback, then do a staged rollout as you release the new version to all users.

Google Analytics Conference Nordic in Stockholm, Sweden

Join the Google Analytics Certified Partners for Google Analytics Conference Nordic in Sweden. The event takes place August 26 in Stockholm, Sweden, and is followed by a workshop on August 27.

Started as an initiative by Outfox, who gathered the other Google Analytics Certified Partners, the conference is now returning for the fifth consecutive year.

Our Stockholm conference includes:

 • Case studies from businesses and other organizations, such as The Swedish Society for Nature Conservation, Viaplay, and Storebrand. In other words, Google Analytics for sales, entertainment, non-profits, insurance, and more!
 • Expert presentations by Google Analytics Certified Partners.
 • Opportunities to interact with peers and experts.
 • …much more!

The conference will also be joined by two top speakers from Google, Sagnik Nandy and Daniel Waisberg.

Sagnik Nandy is a technical leader and manager of several Analytics and Reporting efforts at Google. He has hands-on experience building, scaling, deploying and managing large-scale systems used by millions of websites around the world.

Daniel Waisberg is Analytics Advocate at Google, where he is responsible for fostering Google Analytics by educating and inspiring online marketing professionals. Both at Google and in his previous positions, Daniel has worked with some of the biggest Internet brands to measure and optimize online behavior.

Besides meeting Google, you’ll meet several Nordic Google Analytics Certified Partners. You will also meet and learn from several end users who use Google Analytics on a daily basis.

To join us in Stockholm in August, visit the conference site and secure your ticket.

Posted by Lars Johansson, Google Analytics Certified Partner and Google Analytics Premium Authorized Reseller

Barcode Detection in Google Play services

Posted by Laurence Moroney, Developer Advocate

With the release of Google Play services 7.8 we’re excited to announce that we’ve added new Mobile Vision APIs, including the Barcode Scanner API to read and decode a myriad of different barcode types quickly, easily and locally.

Barcode detection

Classes for detecting and parsing bar codes are available in the com.google.android.gms.vision.barcode namespace. The BarcodeDetector class is the main workhorse — processing Frame objects to return a SparseArray<Barcode>.

The Barcode type represents a single recognized barcode and its value. In the case of 1D barcodes such as UPC codes, this will simply be the number that is encoded in the barcode. This is available in the rawValue property, with the detected encoding type set in the format field.

For 2D barcodes that contain structured data, such as QR codes, the valueFormat field is set to the detected value type, and the corresponding data field is set. So, for example, if the URL type is detected, the constant URL will be loaded into the valueFormat, and the URL property will contain the desired value. Beyond URLs, there are lots of different data types that the QR code can support — check them out in the documentation here.
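Putting this together, a minimal sketch of detecting barcodes in a Bitmap might look like this; the format filter and log tag are illustrative choices:

BarcodeDetector detector = new BarcodeDetector.Builder(context)
        .setBarcodeFormats(Barcode.QR_CODE | Barcode.EAN_13)
        .build();
if (detector.isOperational()) {
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray<Barcode> barcodes = detector.detect(frame);
    for (int i = 0; i < barcodes.size(); i++) {
        Barcode barcode = barcodes.valueAt(i);
        // rawValue holds the decoded string; format holds the detected encoding
        Log.d("BarcodeDemo", barcode.rawValue + " (format " + barcode.format + ")");
    }
}
detector.release(); // free the detector's native resources when done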

When using the API, you can read barcodes in any orientation. They don’t always need to be straight on, and oriented upwards!

Importantly, all barcode parsing is done locally, making it really fast, and in some cases, such as PDF-417, all the information you need might be contained within the barcode itself, so you don’t need any further lookups.

You can learn more about using the API by checking out the sample on GitHub. This uses the Mobile Vision APIs along with a Camera preview to detect both faces and barcodes in the same image.

Supported Bar Code Types

The API supports both 1D and 2D bar codes, in a number of sub formats.

For 1D Bar Codes, these are:

EAN-13
EAN-8
UPC-A
UPC-E
Code-39
Code-93
Code-128
ITF
Codabar

For 2D Bar Codes, these are:

QR Code
Data Matrix
PDF 417

Learn More

It’s easy to build applications that use bar code detection using the Barcode Scanner API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:

Follow the Code Lab

Read the Mobile Vision Documentation

Explore the sample

Face Detection in Google Play services

Posted by Laurence Moroney, Developer Advocate

With the release of Google Play services 7.8, we announced the addition of new Mobile Vision APIs, which includes a new Face API that finds human faces in images and video better and faster than before. This API is also smarter at distinguishing faces at different orientations and with different facial features and expressions.

Face Detection

Face Detection is a leap forward from the previous Android FaceDetector.Face API. It’s designed to better detect human faces in images and video for easier editing. It’s smart enough to detect faces even at different orientations — so if your subject’s head is turned sideways, it can detect it. Specific landmarks can also be detected on faces, such as the eyes, the nose, and the edges of the lips.

Important Note

This is not a face recognition API. Instead, the new API simply detects areas in the image or video that are human faces. It also infers from changes in the position frame to frame that faces in consecutive frames of video are the same face. If a face leaves the field of view, and re-enters, it isn’t recognized as a previously detected face.

Detecting a face

When the API detects a human face, it is returned as a Face object. The Face object provides the spatial data for the face so you can, for example, draw bounding rectangles around a face or, using landmarks on the face, add features to the face in the correct place, such as giving a person a new hat. The following calls are available (a short sketch after this list shows them in use):

  • getPosition() – Returns the top left coordinates of the area where a face was detected
  • getWidth() – Returns the width of the area where a face was detected
  • getHeight() – Returns the height of the area where a face was detected
  • getId() – Returns an ID that the system associated with a detected face
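Here’s a minimal sketch using these calls to find faces in a Bitmap and compute a bounding box for each one; the drawing itself is omitted, and context and bitmap are assumed to exist:

FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false) // single images rather than video
        .build();
Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);
for (int i = 0; i < faces.size(); i++) {
    Face face = faces.valueAt(i);
    float left = face.getPosition().x;
    float top = face.getPosition().y;
    RectF bounds = new RectF(left, top,
            left + face.getWidth(), top + face.getHeight());
    // draw bounds on an overlay here
}
detector.release(); // free native resources when done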

Orientation

The Face API is smart enough to detect faces in multiple orientations. As the head is a solid object that is capable of moving and rotating around multiple axes, the view of a face in an image can vary wildly.

Here’s an example of a human face, instantly recognizable to a human, despite being oriented in greatly different ways:

The API is capable of detecting this as a face, even in the circumstances where as much as half of the facial data is missing, and the face is oriented at an angle, such as in the corners of the above image.

Here are the method calls available to a face object:

  • getEulerY() – Returns the rotation of the face around the vertical axis — i.e. has the neck turned so that the face is looking left or right [The y degree in the above image]
  • getEulerZ() – Returns the rotation of the face around the Z axis — i.e. has the user tilted their neck to cock the head sideways [The r degree in the above image]

Landmarks

A landmark is a point of interest within a face. The API provides a getLandmarks() method which returns a List<Landmark>, where each Landmark object returns the coordinates of the landmark. A landmark can be one of the following: bottom of mouth, left cheek, left ear, left ear tip, left eye, left mouth, base of nose, right cheek, right ear, right ear tip, right eye or right mouth.
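For instance, assuming the detector was built with setLandmarkType(FaceDetector.ALL_LANDMARKS), locating the left eye on a detected face might look like this sketch:

for (Landmark landmark : face.getLandmarks()) {
    if (landmark.getType() == Landmark.LEFT_EYE) {
        PointF position = landmark.getPosition();
        // anchor a graphic (say, a hat brim or monocle) at this position
    }
}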

Activity

In addition to detecting the landmark, the API offers the following function calls to allow you to smartly detect various facial states:

  • getIsLeftEyeOpenProbability() – Returns a value between 0 and 1, giving probability that the left eye is open
  • getIsRightEyeOpenProbability() – Same, but for the right eye
  • getIsSmilingProbability() – Returns a value between 0 and 1 giving a probability that the face is smiling

Thus, for example, you could write an app that only takes a photo when all of the subjects in the image are smiling.
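Here’s a minimal sketch of that check, reusing the faces array from the earlier sketch and assuming the detector was built with setClassificationType(FaceDetector.ALL_CLASSIFICATIONS); the 0.8 threshold and takePicture() helper are hypothetical:

boolean everyoneSmiling = faces.size() > 0;
for (int i = 0; i < faces.size(); i++) {
    // getIsSmilingProbability() returns -1 when the probability
    // could not be computed for this face
    if (faces.valueAt(i).getIsSmilingProbability() < 0.8f) {
        everyoneSmiling = false;
        break;
    }
}
if (everyoneSmiling) {
    takePicture(); // hypothetical camera helper
}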

Learn More

It’s easy to build applications that use facial detection using the Face API, and we’ve provided lots of great resources that will allow you to do so. Check them out here:

Follow the Code Lab

Read the Documentation

Explore the sample

Google Play services 7.8 – Let’s see what’s Nearby!

Posted by Magnus Hyttsten, Developer Advocate, Play services team

Today we’ve finished the roll-out of Google Play services 7.8. In this release, we’ve added two new APIs. The Nearby Messages API allows you to build simple interactions between nearby devices and people, while the Mobile Vision API helps you create apps that make sense of the visual world, using real-time on-device vision technology. We’ve also added optimization and new features to existing APIs. Check out the highlights in the video or read about them below.

Nearby Messages

Nearby Messages introduces a cross-platform API to find and communicate with mobile devices and beacons, based on proximity. Nearby uses a combination of Bluetooth, Wi-Fi, and an ultrasonic audio modem to connect devices. And it works across Android and iOS. For more info on Nearby Messages, check out the documentation and the launch blog post.

Mobile Vision API

We’re happy to announce a new Mobile Vision API. Mobile Vision has two components.

The Face API allows developers to find human faces in images and video. It’s faster, more accurate and provides more information than the Android FaceDetector.Face API. It finds faces in any orientation, allows developers to find landmarks such as the eyes, nose, and mouth, and identifies faces that are smiling and/or have their eyes open. Applications include photography, games, and hands-free user interfaces.

The Barcode API allows apps to recognize barcodes in real-time, on device, in any orientation. It supports a range of barcodes and can detect multiple barcodes at once. For more information, check out the Mobile Vision documentation.

Google Cloud Messaging

And finally, Google Cloud Messaging – Google’s simple and reliable messaging service – has expanded notifications to support localization for Android. When composing the notification from the server, set the appropriate body_loc_key, body_loc_args, title_loc_key, and title_loc_args. GCM will handle displaying the notification based on the current device locale, which saves you having to figure out which messages to display on which devices! Check out the docs for more info.

And getting ready for the Android M release, we’ve added high and normal priority to GCM messaging, giving you additional control over message delivery through GCM. Set messages that need the user’s immediate attention to high priority, e.g., a chat message alert or an incoming voice call alert, and keep the remaining messages at normal priority so that they can be handled in the most battery-efficient way without impeding your app’s performance.
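As a sketch, a server-side payload combining localized notification strings with a delivery priority might look like this; the topic name and *_loc_key values are hypothetical resources from your own app:

{
  "to": "/topics/deals",
  "priority": "high",
  "notification": {
    "icon": "ic_notification",
    "title_loc_key": "deal_title",
    "body_loc_key": "deal_body",
    "body_loc_args": ["20"]
  }
}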

SDK Now Available!

You can get started developing today by downloading the Google Play services SDK from the Android SDK Manager.

To learn more about Google Play services and the APIs available to you through it, visit our documentation on Google Developers.

Android Developer Story: Zabob Studio and Buff Studio reach global users with Google Play

Posted by Lily Sheringham, Google Play team

South Korean Games developers Zabob Studio and Buff Studio are start-ups seeking to become major players in the global mobile games industry.

Zabob Studio was set up by Kwon Dae-hyeon and his wife in 2013. Although it remains a couple-run business, the studio has already published ten games, including the hits ‘Zombie Judgement Day’ and ‘Infinity Dungeon.’ So far, the company has generated more than KRW ₩140M (approximately $125,000 USD) in sales revenue, with about 60 percent of the studio’s downloads coming from international markets, such as Taiwan and Brazil.

Elsewhere, Buff Studio was founded in 2014 and, right from the start, its first game Buff Knight was an instant hit. It was even featured as the ‘Game of the Week’ on Google Play and was included in “30 Best Games of 2014” lists. A sequel is already in the works, showing the potential of the franchise.

In this video, Kwon Dae-hyeon, CEO of Zabob Studio, and Kim Do-Hyeong, CEO of Buff Studio, talk about how Google Play services and the Google Play Developer Console have helped them maintain a competitive edge, market their games efficiently to global users and grow revenue on the platform.

Android Developer Story: Buff Studio – Reaching global users with Google Play

Android Developer Story: Zabob Studio – Growing revenue with Google Play

Check out Zabob Studio’s apps and Buff Knight on Google Play!

We’re pleased to share that Android Developer Stories will now come with translated subtitles on YouTube in popular languages around the world. Find out how to turn on YouTube captions. To read locally translated blog posts, visit the Google developer blog in Korean.

Android Experiments: A celebration of creativity and code

Posted by Roman Nurik, Design Advocate, and Richard The, Google Creative Lab

Android was created as an open and flexible platform, giving people more ways to come together to imagine and create. This spirit of invention has allowed developers to push the boundaries of mobile development and has helped make Android the go-to platform for creative projects in more places—from phones, to tablets, to watches, and beyond. We set out to find a way to celebrate the creative, experimental Android work of developers everywhere and inspire more developers to get creative with technology and code.

Today, we’re excited to launch Android Experiments: a showcase of inspiring projects on Android and an open invitation for all developers to submit their own experiments to the gallery.

The 20 initial experiments show a broad range of creative work–from camera experiments to innovative Android Wear apps to hardware hacks to cutting edge OpenGL demos. All are built using platforms such as the Android SDK and NDK, Android Wear, the IOIO board, Cinder, Processing, OpenFrameworks and Unity. Each project creatively examines in small and big ways how we think of the devices we interact with every day.

Today is just the beginning as we’re opening up experiment submissions to creators everywhere. Whether you’re a student just starting out or you’ve been at it for a while, and no matter the framework your experiment uses or the device it runs on, Android Experiments is open to everybody.

Check out Android Experiments to view the completed projects, or to submit one of your own. While we can’t post every submission, we’d love to see what you’ve created.

Follow along to see what others build at AndroidExperiments.com.