Android: how to use the Profiler

The new Android Profiler window in the Android Studio 2.4 preview replaces the Android Monitor. The advanced profiling tools display realtime data updates for CPU, memory, and network activity.

The default view in the Android Profiler window, as shown in figure 1, displays a simplified set of data for each profiler. You must first select (1) the device and (2) the app process you want to profile. You can then click one of the graphs to see a more detailed timeline. Every view also includes (3) timeline zoom controls, (4) a button to jump to the realtime data, and (5) an event timeline that shows the lifecycle of activities, all input events, and screen rotation events.

Figure 1. The Android Profiler overview, showing the timeline for all profilers

CPU Profiler

The CPU Profiler shows realtime CPU usage for your app process and system-wide CPU usage on a timeline.

You can select between (1) a traditional instrumented trace (method traces) and a sample-based trace. Then click (2) Record to begin tracing your code. Once you’re done recording, the timeline indicates (3) the captured region; you can then (4) view the state of each thread and (5) see a top-down list, bottom-up list, or flame chart for the methods that executed during the recording.

Figure 2. The CPU Profiler, with results from sampled method tracing
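
If you prefer to start and stop the recording from code rather than with the Record button, the same two trace types can be produced with the android.os.Debug APIs and the resulting .trace files opened in Android Studio. A minimal Kotlin sketch, assuming a hypothetical runExpensiveWork() function and arbitrary trace names:

    import android.os.Debug

    // Minimal sketch: bracket the code you want to profile with method tracing.
    // The .trace files are written to app-specific storage and can be opened in
    // Android Studio to get the same top-down, bottom-up, and flame chart views.
    fun traceExpensiveWork() {
        // Instrumented trace: every method entry and exit is recorded.
        Debug.startMethodTracing("expensive-work")            // hypothetical trace name
        runExpensiveWork()
        Debug.stopMethodTracing()

        // Sample-based trace: call stacks are sampled at a fixed interval
        // (here 1000 microseconds) into an 8 MB buffer, with lower overhead.
        Debug.startMethodTracingSampling("expensive-work-sampled", 8 * 1024 * 1024, 1000)
        runExpensiveWork()
        Debug.stopMethodTracing()
    }

    // Placeholder for whatever work you want to profile.
    fun runExpensiveWork() { /* ... */ }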

Memory Profiler

The Memory Profiler view combines the features from Heap Viewer, Allocation Tracker, and Memory Monitor, so you can view a realtime count of allocated objects and garbage collection events on a timeline, capture heap dumps, and record memory allocations, all from one interface.

The Memory Profiler shows the amount of memory used by your app on a timeline, according to the memory size on the left y-axis. Each memory type (such as Java, Native, and Graphics) is indicated with a different color in a stacked graph. The total number of objects allocated by your app is indicated with a dotted line, according to the y-axis on the right. Values for each are also specified in a key at the top of the graph.

The toolbar at the top of the window allows you to (1) Force garbage collection, (2) Capture a heap dump, and (3) Record memory allocations.

When you capture a heap dump or record memory allocations, the (4) recording event is indicated on the timeline. Your results then appear in (5) the pane below the timeline, which shows the memory allocation results for the time range indicated in the timeline. When viewing either a heap dump or memory allocations, you can select a class name from this list to view the (6) list of instances on the right. Clicking an instance there reveals (7) a third pane below, showing either the stack trace for where that memory was allocated (when viewing the allocation record) or the remaining references to that object (when viewing a heap dump).

You can also capture a heap dump while memory allocation tracking is turned on to get stack traces in the heap dump (for objects allocated after allocation was turned on).
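
If you need a heap dump at a precise moment that is hard to catch with the toolbar button, the same kind of HPROF dump can be triggered from code with android.os.Debug and then imported into Android Studio. A minimal Kotlin sketch, assuming an arbitrary file name:

    import android.content.Context
    import android.os.Debug
    import java.io.File

    // Minimal sketch: write a heap dump from code. The resulting .hprof file can
    // be opened in Android Studio for the same class, instance, and reference view.
    fun dumpHeap(context: Context): File {
        val dumpFile = File(context.filesDir, "manual-heap-dump.hprof")  // arbitrary name
        Runtime.getRuntime().gc()                    // best-effort GC, like the Force GC button
        Debug.dumpHprofData(dumpFile.absolutePath)   // blocks while the dump is written
        return dumpFile
    }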

Network Profiler

The Network Profiler displays realtime network activity on a timeline, showing data sent and received, as well as the current number of connections. At the top of the window, you can see the event timeline and (1) radio power state (high/low) vs Wi-Fi.

On the timeline, you can (2) click and drag to select a portion of the timeline to inspect the traffic. The (3) window below then shows files sent and received during the selected portion of the timeline, including file name, size, type, status, and time. You can sort this list by clicking any of the column headers. You also see a detailed breakdown of the selected portion of the timeline, showing when each file was sent or received.

Click a file name to view (4) detailed information about a selected file sent or received. Click the tabs to view the response data, header information, or the call stack.

Network Connection Troubleshooting

If the Network Profiler detects traffic values, but cannot identify any supported network requests, you will receive the following error message:

“No connections supported for instrumentation.”

Currently, the Network Profiler only supports the HttpURLConnection library for network connections. If your app uses another network connection library, you will not be able to view your network activity in the Network Profiler. If you have received this error message, but your app does use HttpURLConnection , please report a bug so we can investigate the issue.
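
For reference, a request made through HttpURLConnection, which the Network Profiler does instrument, looks roughly like the Kotlin sketch below. The function is a generic example, not taken from any particular app; run it off the main thread.

    import java.net.HttpURLConnection
    import java.net.URL

    // Minimal sketch: a GET request over HttpURLConnection. Traffic from this
    // call appears in the Network Profiler timeline and connection list.
    fun fetch(urlString: String): String {
        val connection = URL(urlString).openConnection() as HttpURLConnection
        return try {
            connection.requestMethod = "GET"
            connection.inputStream.bufferedReader().use { it.readText() }
        } finally {
            connection.disconnect()
        }
    }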

Native Memory Profiling with Android Studio 4.1

This is the second post in a two-part series on What’s New in Profilers in Android Studio 4.1. Our previous post focused on What’s New in System Trace.

We’ve heard from those of you using C++ that debugging native memory can be fairly difficult, particularly in games. With Android Studio 4.1, we’ve implemented the ability to record call stacks of native memory allocations in our Memory Profiler. The native memory recording is built on top of the Perfetto backend, the next generation performance instrumentation and tracing solution for Android.

A common technique when trying to debug memory issues is to understand what is allocating memory and what is freeing memory. The rest of this article will walk you through how to use the Native Memory Profiler to help track down a leak, using the Gpu Emulation Stress Test as an example project.

Getting Started

When a memory leak is suspected, it’s often a good idea to start at a high level and watch for patterns in the system memory. To do this, click the profile button in Android Studio and enter the Memory Profiler for more detailed memory tracking information.

After running the simulation a few times we can see a few interesting patterns.

  1. The GPU memory increases, as one may expect from a GPU emulation app; however, it also looks like this memory gets properly cleaned up after the Activity is finished.
  2. The Native memory grows each time we enter the GpuEmulationStressTestActivity; however, this memory does not seem to reset after each run, which might be indicative of a leak.

Native Memory Table View

Starting with Android Studio 4.1 Canary 6, we can grab a recording of native memory allocations to analyze why memory isn’t being released. To do this with the GPU emulation app, I stopped the running app and started profiling a fresh instance. Starting from a clean state, especially when looking at an unfamiliar codebase, can help narrow our focus. From the Memory Profiler I captured a native allocation recording throughout the duration of the GPU emulation demo. To do this, restart the app by selecting Run > Profile ‘app’. After the application starts and the profiling window opens, click on the Memory profiler and select “Record native allocations”.

The table view is useful for games/applications that use libraries implementing their own allocators, since it highlights malloc calls that are made outside of new.

When a recording is loaded, the data is first presented in a table. The table shows the leaf functions calling malloc. In addition to the function name, the table shows the module, count, size, and delta. This information is sampled, so it is possible that not all malloc/free calls will be captured. This largely depends on the sampling rate, which will be discussed a bit later.

It is also useful to know where these functions that allocate memory are being called from. There are two ways to visualize this information. The first is by changing the “Arrange by allocation method” dropdown to “Arrange by call stack”. The table then shows a tree of call stacks, similar to what you may expect from a CPU recording. If the current project has symbols (which is usually the case for debuggable builds; if you’re profiling an external APK, check out the guide here), they will automatically be picked up and used. This allows you to right-click on a function and select “Jump to Source”.

Memory Visualization (Native and non-native)

We’ve also added a new flame chart visualization to the memory profilers, allowing you to quickly see which call stacks are responsible for allocating the most memory. This is especially useful when a call stack is really deep.

There are four ways you can sort this data along the X axis:

  • “Allocation Size” is the default, showing the total amount of memory tracked.
  • “Allocation Count” shows the total number of objects allocated.
  • “Total Remaining Size” is the size of memory sampled throughout the capture that was not freed before the end of the capture.
  • “Total Remaining Count”, like the remaining size, is the count of objects captured but not freed before the end of the capture.

From here we can right-click on the call stacks and select “Jump to Source” to take us to the line of code responsible for the allocation. However, taking a second glance at the visualization, we notice that the common parent, WorldState, is responsible for multiple leaks. To validate this, it can help to filter the results.

Filtering / Navigation

Like with the table view, the chart can be filtered using the filter bar. When the filter is used, the data in the chart is automatically updated to show only call stacks that have functions matching the word/regex searched.

70MB of our total assumed leak

Sometimes call stacks can get fairly long, or there just isn’t enough room to display the function name on screen. To assist with this, Ctrl + mouse wheel zooms in and out, or you can click on the chart and use the W, A, S, D keys to navigate.

Verifying the findings

Adding a breakpoint and running the emulation twice quickly reveals that on the second run we cause the leak by overwriting the pointer from our first run.

As a quick fix to the sample, we can delete the world after it is marked done, then profile the application again to validate the fix.

We end where we started, by looking at the high-level memory stats and validating that deleting sWorld at the end of the simulation frees up the 70 MB held by our first run.
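
The sample allocates and leaks its world object in C++; purely to illustrate the pattern, here is a hypothetical Kotlin/JNI sketch of the same mistake and the fix (WorldBridge, createWorld, and destroyWorld are made-up names, not part of the Gpu Emulation Stress Test):

    // Hypothetical sketch: a handle to natively allocated memory is overwritten
    // on every run without being freed, so that memory can never be released.
    // Assumes a native library exposing createWorld()/destroyWorld() via JNI.
    object WorldBridge {
        private var worldHandle: Long = 0L  // pointer to a native world, 0 = none

        // Leaky version: the handle from the previous run is silently overwritten.
        fun startSimulationLeaky() {
            worldHandle = createWorld()
        }

        // Fixed version: free the previous world before allocating a new one,
        // the Kotlin-side analogue of deleting sWorld at the end of the simulation.
        fun startSimulation() {
            if (worldHandle != 0L) {
                destroyWorld(worldHandle)
                worldHandle = 0L
            }
            worldHandle = createWorld()
        }

        private external fun createWorld(): Long
        private external fun destroyWorld(handle: Long)
    }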

Startup profiling and sample rate setting

The sample above shows how native memory tracking can be used to find and fix memory leaks. Another common use for native memory tracking is understanding where memory is going during startup of the application. In Android Studio 4.1, we also added the ability to capture native memory recordings from the startup of the application. This is available in the “Run/Debug Configurations” dialog under the “Profiling” tab.

You can customize the sampling interval or record memory at startup in the Run configuration dialog.

Here you can also change the sampling interval for new captures. A smaller interval captures more allocations but can have a large impact on overall performance, while a larger interval can miss some allocations. Different sampling intervals work for different types of memory problems.

Wrapping up

With the new native memory profiler finding memory leaks and understanding where memory is being held on to just got a little bit easier. Give the native memory profiler a try in Android Studio 4.1, and leave any feedback on our bug tracker. For additional tips and tricks be sure to also check out our talk earlier this year at the Google for Games summit, Android memory tools and best practices.

Profiling Android Apps

Before deploying your app to an app store, it’s important to identify and fix any performance bottlenecks, excessive memory usage issues, or inefficient use of network resources. Two profiler tools are available to serve this purpose:

  • Xamarin Profiler
  • Android Profiler in Android Studio

This guide introduces the Xamarin Profiler and provides detailed information for getting started with using the Android Profiler.

Xamarin Profiler

The Xamarin Profiler is a standalone application that is integrated with Visual Studio and Visual Studio for Mac for profiling Xamarin apps from within the IDE. For more information about using the Xamarin Profiler, see Xamarin Profiler.

You must be a Visual Studio Enterprise subscriber to unlock the Xamarin Profiler feature in either Visual Studio Enterprise on Windows or Visual Studio for Mac.

Android Studio Profiler

Android Studio 3.0 and later includes an Android Profiler tool. You can use the Android Profiler to measure the performance of a Xamarin Android app built with Visual Studio – without the need for a Visual Studio Enterprise license. However, unlike the Xamarin Profiler, the Android Profiler is not integrated with Visual Studio and can only be used to profile an Android application package (APK) that has been built in advance and imported into the Android Profiler.

Launching a Xamarin Android app in Android Profiler

The following steps explain how to launch a Xamarin Android application in Android Studio’s Android Profiler tool. In the example screenshots below, the Xamarin.Forms XamagonXuzzle app is built and profiled using Android Profiler:

In the Android project build options, disable Use Shared Runtime. This ensures that the Android application package (APK) is built without a dependency on the shared development-time Mono runtime.

Build the app for Debug and deploy it to a physical device or emulator. This causes a signed Debug version of the APK to be built. For the XamagonXuzzle example, the resulting APK is named com.companyname.XamagonXuzzle-Signed.apk.

Open the project folder and navigate to bin/Debug. In this folder, locate the Signed.apk version of the app and copy it to a conveniently accessible place (such as the desktop). In the following screenshot, the APK com.companyname.XamagonXuzzle-Signed.apk is located and copied to the desktop:

Launch Android Studio and select Profile or debug APK:

In the Select APK File dialog, navigate to the APK that you built and copied earlier. Select the APK and click OK:

Android Studio will load the APK and disassemble classes.dex:

After the APK is loaded, Android Studio displays the following project screen for the APK. Right-click the app name in the tree view on the left and select Open Module Settings:

Navigate to Project Settings > Modules, select the -Signed node of the app, then click:

In the Module SDK pull-down menu, select the Android SDK level that was used to build the app (in this example, API level 26 was used to build XamagonXuzzle):

Click Apply and OK to save this setting.

Launch the profiler from the toolbar icon:

Select the deployment target for running/profiling the app and click OK. The deployment target can be a physical device or a virtual device running in an emulator. In this example, a Nexus 5X device is used:

After the profiler starts, it will take a few seconds for it to connect to the deployment device and the app process. While it is installing the APK, Android Profiler will report No connected devices and No debuggable processes.

After several seconds, Android Profiler will complete APK installation and launch the APK, reporting the device name and the name of the app process being profiled (in this example, LGE Nexus 5X and com.companyname.XamagonXuzzle, respectively):

After the device and debuggable process are identified, Android Profiler begins profiling the app:

If you tap the RANDOMIZE button on XamagonXuzzle (which causes it to shift and randomize tiles), you will see the CPU usage increase during the app’s randomization interval:

Using the Android Profiler

Detailed information for using the Android Profiler is included in the Android Studio documentation. The following topics will be of interest to Xamarin Android developers:

CPU Profiler – Explains how to inspect the app’s CPU usage and thread activity in real-time.

Memory Profiler – Displays a real-time graph of the app’s memory usage, and includes a button to record memory allocations for analysis.

Network Profiler – Displays real-time network activity of data sent and received by the app.
