A Simple Game

Before diving into the APIs provided by libGDX, let’s create a very simple “game”, that touches each module provided by the framework, to get a feeling for things. We’ll introduce a few different concepts without going into unnecessary detail.

Set Up a Dev Environment

Generate a Project

In the following, we’ll look at:

  • Basic file access
  • Clearing the screen
  • Drawing images
  • Using a camera
  • Basic input processing
  • Playing sound effects

Project Setup

Follow the steps in the Generating a Project guide. In the following, we will use these settings:

  • Application name: drop
  • Package name: com.badlogic.drop
  • Game class: Drop

Now fill in the destination. If you are interested in Android development, be sure to check that option and provide the Android SDK folder. For the purpose of this tutorial, we will uncheck the iOS sub project (as you would need OS X to run it) and all extensions (extensions are a more advanced topic).

Once imported into your IDE, you should have 5 projects or modules: the main one, drop, and the sub projects android (or drop-android under Eclipse), core/drop-core, desktop/drop-desktop, and html/drop-html.

To launch or debug the game, see the page Importing & Running a Project.

If we just run the project, we will get an error: Couldn’t load file: badlogic.jpg. Your Run Configuration has to be properly configured first: set the working directory to PATH_TO_YOUR_PROJECT/drop/android/assets. If we run it now, we will get the default ‘game’ generated by the startup app: a Badlogic Games image on a red background. Not too exciting, but that’s about to change.

The Game

The game idea is very simple:

  • Catch raindrops with a bucket.
  • The bucket is located in the lower part of the screen.
  • Raindrops spawn randomly at the top of the screen every second and accelerate downwards.
  • Player can drag the bucket horizontally via the mouse/touch or move it via the left and right cursor keys.
  • The game has no end — think of it as a zen-like experience 🙂

The Assets

We need a few images and sound effects to make the game look somewhat pretty. For the graphics we define a target resolution of 800×480 pixels (landscape mode on Android). If the device the game is run on does not have that resolution, we simply scale everything to fit on the screen.

Note: for high profile games you might want to consider using different assets for different screen densities. This is a big topic on its own and won’t be covered here.

The raindrop and the bucket should take up a small(ish) portion of the screen vertically, so we’ll let them have a size of 64×64 pixels.

The following sources provide some sample assets:

  • water drop sound by junggle, see here
  • rain sounds by acclivity, see here
  • droplet sprite by mvdv, see here
  • bucket sprite by mvdv, see here

To make the assets available to the game, we have to place them in the Android assets folder. I named the 4 files drop.wav, rain.mp3, droplet.png and bucket.png and put them in android/assets/. We only need to store the assets once, as both the desktop and HTML5 projects are configured to ‘see’ this folder through different means. After that, depending on your IDE you may have to refresh the project tree to make the new files known (in Eclipse, right click -> Refresh), otherwise you may get a ‘file not found’ runtime exception.

Configuring the Starter Classes

Given our requirements we can now configure our different starter classes. We’ll start with the desktop project. Open the DesktopLauncher.java class in desktop/src/… (or drop-desktop under Eclipse). We want an 800×480 window with the title set to “Drop”. The code should look like this:
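The listing itself is missing here; with the legacy LWJGL2 desktop backend that older setup tools generate (an assumption on my part; newer setups use the Lwjgl3Application classes instead), it would look roughly like this:

```java
package com.badlogic.drop.desktop;

import com.badlogic.drop.Drop;
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class DesktopLauncher {
    public static void main(String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.title = "Drop";   // window title
        config.width = 800;      // window width in pixels
        config.height = 480;     // window height in pixels
        new LwjglApplication(new Drop(), config);
    }
}
```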

If you are only interested in desktop development, you can skip the rest of this section.

Moving on to the Android project, we want the application to be run in landscape mode. For this we need to modify AndroidManifest.xml in the android (or drop-android ) root directory, which looks like this:
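The manifest listing was lost in extraction; the relevant part is the activity element, which should look something like the sketch below (the exact attributes vary between setup tool versions, so treat the details other than android:screenOrientation as assumptions):

```xml
<activity
    android:name="com.badlogic.drop.AndroidLauncher"
    android:label="@string/app_name"
    android:screenOrientation="landscape"
    android:configChanges="keyboard|keyboardHidden|orientation|screenSize">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
```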

The setup tool already filled in the correct values for us: android:screenOrientation is set to “landscape”. If we wanted to run the game in portrait mode we would have set that attribute to “portrait”.

We also want to conserve battery and disable the accelerometer and compass. We do this in the AndroidLauncher.java file in android/src/… (or drop-android ), which should look something like this:
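The original listing is gone; based on the standard libGDX Android launcher, it should look something like this:

```java
package com.badlogic.drop;

import android.os.Bundle;

import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class AndroidLauncher extends AndroidApplication {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        AndroidApplicationConfiguration config = new AndroidApplicationConfiguration();
        config.useAccelerometer = false; // we don't use the accelerometer, save battery
        config.useCompass = false;       // same for the compass
        initialize(new Drop(), config);
    }
}
```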

We cannot define the resolution of the Activity , as it is set by the Android operating system. As we defined earlier, we’ll simply scale the 800×480 target resolution to whatever the resolution of the device is.

Finally we want to make sure the HTML5 project also uses a 800×480 drawing area. For this we modify the HtmlLauncher.java file in html/src/… (or drop-html ):
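Again the listing is missing; based on the standard GWT launcher generated by the setup tool, it should look roughly like this:

```java
package com.badlogic.drop.client;

import com.badlogic.drop.Drop;
import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.backends.gwt.GwtApplication;
import com.badlogic.gdx.backends.gwt.GwtApplicationConfiguration;

public class HtmlLauncher extends GwtApplication {
    @Override
    public GwtApplicationConfiguration getConfig() {
        // use a fixed 800x480 drawing area in the browser
        return new GwtApplicationConfiguration(800, 480);
    }

    @Override
    public ApplicationListener createApplicationListener() {
        return new Drop();
    }
}
```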

All our starter classes are now correctly configured, let’s move on to implementing our fabulous game.

The Code

We want to split up our code into a few sections. For the sake of simplicity we keep everything in the Drop.java file of the Core project, located in core/src/… (or drop-core in Eclipse).

Loading the Assets

Our first task is to load the assets and store references to them. Assets are usually loaded in the ApplicationAdapter.create() method, so let’s do that:
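The code listing did not survive extraction; following the description in the next paragraphs, it should look something like this:

```java
public class Drop extends ApplicationAdapter {
    private Texture dropImage;
    private Texture bucketImage;
    private Sound dropSound;
    private Music rainMusic;

    @Override
    public void create() {
        // load the images for the droplet and the bucket, 64x64 pixels each
        dropImage = new Texture(Gdx.files.internal("droplet.png"));
        bucketImage = new Texture(Gdx.files.internal("bucket.png"));

        // load the drop sound effect and the rain background "music"
        dropSound = Gdx.audio.newSound(Gdx.files.internal("drop.wav"));
        rainMusic = Gdx.audio.newMusic(Gdx.files.internal("rain.mp3"));

        // start the playback of the background music immediately
        rainMusic.setLooping(true);
        rainMusic.play();
    }
}
```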

For each of our assets we have a field in the Drop class so we can later refer to it. The first two lines in the create() method load the images for the raindrop and the bucket. A Texture represents a loaded image that is stored in video RAM. One can usually not draw to a Texture. A Texture is loaded by passing a FileHandle to an asset file to its constructor. Such FileHandle instances are obtained through one of the methods provided by Gdx.files. There are different types of files; we use the “internal” file type here to refer to our assets. Internal files are located in the assets directory of the Android project. As seen before, the desktop and HTML5 projects reference the same directory.

Next we load the sound effect and the background music. libGDX differentiates between sound effects, which are stored in memory, and music, which is streamed from wherever it is stored. Music is usually too big to be kept in memory completely, hence the differentiation. As a rule of thumb, you should use a Sound instance if your sample is shorter than 10 seconds, and a Music instance for longer audio pieces.

Note: libGDX supports MP3, OGG and WAV files. Which format you should use depends on your specific needs, as each format has its own advantages and disadvantages. For example, WAV files are quite large compared to other formats, OGG files don’t work on RoboVM (iOS) nor with Safari (GWT), and MP3 files have issues with seamless looping.

Loading of a Sound or Music instance is done via Gdx.audio.newSound() and Gdx.audio.newMusic(). Both of these methods take a FileHandle, just like the Texture constructor.

At the end of the create() method we also tell the Music instance to loop and start playback immediately. If you run the application you’ll see a nice pink background and hear the rain fall.

A Camera and a SpriteBatch

Next up, we want to create a camera and a SpriteBatch . We’ll use the former to ensure we can render using our target resolution of 800×480 pixels no matter what the actual screen resolution is. The SpriteBatch is a special class that is used to draw 2D images, like the textures we loaded.

We add two new fields to the class, let’s call them camera and batch:
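The field declarations are missing from the text; they should look like this:

```java
private OrthographicCamera camera;
private SpriteBatch batch;
```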

In the create() method we first create the camera like this:
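The snippet is missing; based on the 800×480 viewport described below, it should be:

```java
camera = new OrthographicCamera();
camera.setToOrtho(false, 800, 480); // y-axis points up, viewport is 800x480 units
```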

This will make sure the camera always shows us an area of our game world that is 800×480 units wide. Think of it as a virtual window into our world. We currently interpret the units as pixels to make our life a little easier. There’s nothing preventing us from using other units though, e.g. meters or whatever you have. Cameras are very powerful and allow you to do a lot of things we won’t cover in this basic tutorial. Check out the rest of the developer guide for more information.

Next we create the SpriteBatch (we are still in the create() method):
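The one-liner that belongs here:

```java
batch = new SpriteBatch();
```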

We are almost done with creating all the things we need to run this simple game.

Adding the Bucket

The last bits that are missing are representations of our bucket and the raindrop. Let’s think about what we need to represent those in code:

  • A bucket/raindrop has an x/y position in our 800×480 units world.
  • A bucket/raindrop has a width and height, expressed in the units of our world.
  • A bucket/raindrop has a graphical representation, we already have those in form of the Texture instances we loaded.

So, to describe both the bucket and raindrops we need to store their position and size. libGDX provides a Rectangle class which we can use for this purpose. Let’s start by creating a Rectangle that represents our bucket. We add a new field:
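The field declaration that belongs here:

```java
private Rectangle bucket;
```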

In the create() method we instantiate the Rectangle and specify its initial values. We want the bucket to be 20 pixels above the bottom edge of the screen, and centered horizontally.
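The listing is missing; following the description, it should look like this:

```java
bucket = new Rectangle();
bucket.x = 800 / 2 - 64 / 2; // center the bucket horizontally
bucket.y = 20;               // bottom left corner 20 pixels above the bottom edge
bucket.width = 64;
bucket.height = 64;
```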

We center the bucket horizontally and place it 20 pixels above the bottom edge of the screen. Wait, why is bucket.y set to 20, shouldn’t it be 480 - 20? By default, all rendering in libGDX (and OpenGL) is performed with the y-axis pointing upwards. The x/y coordinates of the bucket define its bottom left corner, and the origin for drawing is located in the bottom left corner of the screen. The width and height of the rectangle are set to 64×64, our small-ish portion of the target resolution’s height.

Note: it is possible to change this setup so the y-axis points down and the origin is in the upper left corner of the screen. OpenGL and the camera class are so flexible that you can have pretty much any kind of viewing angle you want, in 2D and 3D. However, this is not recommended.

Rendering the Bucket

Time to render our bucket. The first thing we want to do is to clear the screen with a dark blue color. Simply change the render() method to look like this:
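The listing is missing; based on the explanation of ScreenUtils.clear() that follows, it should be:

```java
@Override
public void render() {
    // clear the screen with a dark blue color
    ScreenUtils.clear(0, 0, 0.2f, 1);
}
```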

The arguments for ScreenUtils.clear(r, g, b, a) are the red, green, blue and alpha component of that color, each within the range [0, 1].

Next we need to tell our camera to make sure it is updated. Cameras use a mathematical entity called a matrix that is responsible for setting up the coordinate system for rendering. These matrices need to be recomputed every time we change a property of the camera, like its position. We don’t do this in our simple example, but it is generally a good practice to update the camera once per frame:
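The one-liner that belongs here, at the top of render():

```java
camera.update(); // recompute the camera's matrices
```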

Now we can render our bucket:
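The listing is missing; following the explanation of the projection matrix and batching below, it should look like this:

```java
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(bucketImage, bucket.x, bucket.y);
batch.end();
```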

The first line tells the SpriteBatch to use the coordinate system specified by the camera. As stated earlier, this is done with something called a matrix, to be more specific, a projection matrix. The camera.combined field is such a matrix. From there on the SpriteBatch will render everything in the coordinate system described earlier.

Next we tell the SpriteBatch to start a new batch. Why do we need this and what is a batch? OpenGL hates nothing more than telling it about individual images. It wants to be told about as many images to render as possible at once.

The SpriteBatch class helps make OpenGL happy. It will record all drawing commands in between SpriteBatch.begin() and SpriteBatch.end() . Once we call SpriteBatch.end() it will submit all drawing requests we made at once, speeding up rendering quite a bit. This all might look cumbersome in the beginning, but it is what makes the difference between rendering 500 sprites at 60 frames per second and rendering 100 sprites at 20 frames per second.

Making the Bucket Move (Touch/Mouse)

Time to let the user control the bucket. Earlier we said we’d allow the user to drag the bucket. Let’s make things a little bit easier. If the user touches the screen (or presses a mouse button), we want the bucket to center around that position horizontally. Adding the following code to the bottom of the render() method will do this:
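The snippet was stripped from the text; reconstructed from the description in the following paragraphs:

```java
if (Gdx.input.isTouched()) {
    Vector3 touchPos = new Vector3();
    touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
    camera.unproject(touchPos); // convert screen to world coordinates
    bucket.x = touchPos.x - 64 / 2; // center the bucket around the touch position
}
```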

First we ask the input module whether the screen is currently touched (or a mouse button is pressed) by calling Gdx.input.isTouched() . Next we want to transform the touch/mouse coordinates to our camera’s coordinate system. This is necessary because the coordinate system in which touch/mouse coordinates are reported might be different than the coordinate system we use to represent objects in our world.

Gdx.input.getX() and Gdx.input.getY() return the current touch/mouse position (libGDX also supports multi-touch, but that’s a topic for a different article). To transform these coordinates to our camera’s coordinate system, we need to call the camera.unproject() method, which requires a Vector3, a three dimensional vector. We create such a vector, set the current touch/mouse coordinates and call the method. The vector will now contain the touch/mouse coordinates in the coordinate system our bucket lives in. Finally we change the position of the bucket to be centered around the touch/mouse coordinates.

Note: it is very, very bad to instantiate a lot of new objects, such as the Vector3 instance. The reason for this is that the garbage collector has to kick in frequently to collect these short-lived objects. While on the desktop this is not such a big deal (due to the resources available), on Android the GC can cause pauses of up to a few hundred milliseconds, which results in stuttering. In this particular case, if you want to solve this issue, simply make touchPos a private final field of the Drop class instead of instantiating it all the time.

Note: touchPos is a three dimensional vector. You might wonder why that is if we only operate in 2D. OrthographicCamera is actually a 3D camera which takes into account z-coordinates as well. Think of CAD applications, they use 3D orthographic cameras as well. We simply abuse it to draw 2D graphics.

Making the Bucket Move (Keyboard)

On the desktop and in the browser we can also receive keyboard input. Let’s make the bucket move when the left or right cursor key is pressed.

We want the bucket to move without acceleration, at two hundred pixels/units per second, either to the left or the right. To implement such time-based movement we need to know the time that passed in between the last and the current rendering frame. Here’s how we can do all this:
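The listing is missing; reconstructed from the description of Gdx.input.isKeyPressed() and getDeltaTime() that follows (Keys is com.badlogic.gdx.Input.Keys):

```java
if (Gdx.input.isKeyPressed(Keys.LEFT))
    bucket.x -= 200 * Gdx.graphics.getDeltaTime();
if (Gdx.input.isKeyPressed(Keys.RIGHT))
    bucket.x += 200 * Gdx.graphics.getDeltaTime();
```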

The method Gdx.input.isKeyPressed() tells us whether a specific key is pressed. The Keys enumeration contains all the keycodes that libGDX supports. The method Gdx.graphics.getDeltaTime() returns the time passed between the last and the current frame in seconds. All we need to do is modify the bucket’s x-coordinate by adding/subtracting 200 units times the delta time in seconds.

We also need to make sure our bucket stays within the screen limits:
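The clamping snippet that belongs here:

```java
// keep the bucket within the 800-unit-wide world
if (bucket.x < 0) bucket.x = 0;
if (bucket.x > 800 - 64) bucket.x = 800 - 64;
```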

Adding the Raindrops

For the raindrops we keep a list of Rectangle instances, each keeping track of the position and size of a raindrop. Let’s add that list as a field:
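The field declaration that belongs here (Array is com.badlogic.gdx.utils.Array):

```java
private Array<Rectangle> raindrops;
```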

The Array class is a libGDX utility class to be used instead of standard Java collections like ArrayList . The problem with the latter is that they produce garbage in various ways. The Array class tries to minimize garbage as much as possible. libGDX offers other garbage collector aware collections such as hash-maps or sets as well.

We also need to keep track of the last time we spawned a raindrop, so we add another field:
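The field declaration that belongs here:

```java
private long lastDropTime;
```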

We’ll store the time in nanoseconds, that’s why we use a long.

To facilitate the creation of raindrops we’ll write a method called spawnRaindrop() which instantiates a new Rectangle , sets it to a random position at the top edge of the screen and adds it to the raindrops array.
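The method listing is missing; reconstructed from the description above and the MathUtils/TimeUtils explanation below:

```java
private void spawnRaindrop() {
    Rectangle raindrop = new Rectangle();
    raindrop.x = MathUtils.random(0, 800 - 64); // random x along the top edge
    raindrop.y = 480;                           // spawn at the top of the screen
    raindrop.width = 64;
    raindrop.height = 64;
    raindrops.add(raindrop);
    lastDropTime = TimeUtils.nanoTime();        // remember when this drop spawned
}
```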

The method should be pretty self-explanatory. The MathUtils class is a libGDX class offering various math related static methods. In this case it will return a random value between 0 and 800 - 64. The TimeUtils is another libGDX class that provides some very basic time related static methods. In this case we record the current time in nanoseconds, based on which we’ll later decide whether to spawn a new drop or not.

In the create() method we now instantiate the raindrops array and spawn our first raindrop:
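The two lines that belong here, added to create():

```java
raindrops = new Array<Rectangle>();
spawnRaindrop();
```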

Next we add a few lines to the render() method that check how much time has passed since we spawned the last raindrop, and create a new one if necessary:
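The check that belongs here, spawning a drop once a second (1,000,000,000 nanoseconds):

```java
if (TimeUtils.nanoTime() - lastDropTime > 1000000000) spawnRaindrop();
```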

We also need to make our raindrops move, let’s take the easy route and have them move at a constant speed of 200 pixels/units per second. If the raindrop is beneath the bottom edge of the screen, we remove it from the array.
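The update loop that belongs here (Iterator is java.util.Iterator; using the iterator lets us remove drops safely while looping):

```java
for (Iterator<Rectangle> iter = raindrops.iterator(); iter.hasNext(); ) {
    Rectangle raindrop = iter.next();
    raindrop.y -= 200 * Gdx.graphics.getDeltaTime();
    if (raindrop.y + 64 < 0) iter.remove(); // fully below the screen, remove it
}
```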

The raindrops need to be rendered. We’ll add that to the SpriteBatch rendering code which looks like this now:
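The updated rendering block should look like this:

```java
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(bucketImage, bucket.x, bucket.y);
for (Rectangle raindrop : raindrops) {
    batch.draw(dropImage, raindrop.x, raindrop.y);
}
batch.end();
```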

One final adjustment: if a raindrop hits the bucket, we want to playback our drop sound and remove the raindrop from the array. We simply add the following lines to the raindrop update loop:
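The lines that belong here, placed inside the raindrop update loop right after the off-screen check:

```java
if (raindrop.overlaps(bucket)) {
    dropSound.play();
    iter.remove();
}
```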

The Rectangle.overlaps() method checks if this rectangle overlaps with another rectangle. In our case, we tell the drop sound effect to play itself and remove the raindrop from the array.

Cleaning Up

A user can close the application at any time. For this simple example there’s nothing that needs to be done. However, it is in general a good idea to help out the operating system a little and clean up the mess we created.

Any libGDX class that implements the Disposable interface and thus has a dispose() method needs to be cleaned up manually once it is no longer used. In our example that’s true for the textures, the sound and music and the SpriteBatch . Being good citizens, we override the ApplicationAdapter.dispose() method as follows:
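The dispose() listing that belongs here:

```java
@Override
public void dispose() {
    // dispose of all the native resources we created
    dropImage.dispose();
    bucketImage.dispose();
    dropSound.dispose();
    rainMusic.dispose();
    batch.dispose();
}
```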

Once you dispose of a resource, you should not access it in any way.

Disposables are usually native resources which are not handled by the Java garbage collector. This is the reason why we need to manually dispose of them. libGDX provides various ways to help with asset management. Read the rest of the development guide to discover them.

Handling Pausing/Resuming

Android has the notion of pausing and resuming your application every time the user gets a phone call or presses the home button. libGDX will do many things automatically for you in that case, e.g. reload images that might have gotten lost (OpenGL context loss, a terrible topic on its own), pause and resume music streams and so on.

In our game there’s no real need to handle pausing/resuming. As soon as the user comes back to the application, the game continues where it left. Usually one would implement a pause screen and ask the user to touch the screen to continue. This is left as an exercise for the reader — check out the ApplicationAdapter.pause() and ApplicationAdapter.resume() methods.

The Full Source

Here’s the tiny source for our simple game:
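The full listing was stripped from the page; assembling the snippets from the sections above yields something like this:

```java
package com.badlogic.drop;

import java.util.Iterator;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Keys;
import com.badlogic.gdx.audio.Music;
import com.badlogic.gdx.audio.Sound;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.math.Rectangle;
import com.badlogic.gdx.math.Vector3;
import com.badlogic.gdx.utils.Array;
import com.badlogic.gdx.utils.ScreenUtils;
import com.badlogic.gdx.utils.TimeUtils;

public class Drop extends ApplicationAdapter {
    private Texture dropImage;
    private Texture bucketImage;
    private Sound dropSound;
    private Music rainMusic;
    private OrthographicCamera camera;
    private SpriteBatch batch;
    private Rectangle bucket;
    private Array<Rectangle> raindrops;
    private long lastDropTime;

    @Override
    public void create() {
        dropImage = new Texture(Gdx.files.internal("droplet.png"));
        bucketImage = new Texture(Gdx.files.internal("bucket.png"));
        dropSound = Gdx.audio.newSound(Gdx.files.internal("drop.wav"));
        rainMusic = Gdx.audio.newMusic(Gdx.files.internal("rain.mp3"));
        rainMusic.setLooping(true);
        rainMusic.play();
        camera = new OrthographicCamera();
        camera.setToOrtho(false, 800, 480);
        batch = new SpriteBatch();
        bucket = new Rectangle();
        bucket.x = 800 / 2 - 64 / 2;
        bucket.y = 20;
        bucket.width = 64;
        bucket.height = 64;
        raindrops = new Array<Rectangle>();
        spawnRaindrop();
    }

    private void spawnRaindrop() {
        Rectangle raindrop = new Rectangle();
        raindrop.x = MathUtils.random(0, 800 - 64);
        raindrop.y = 480;
        raindrop.width = 64;
        raindrop.height = 64;
        raindrops.add(raindrop);
        lastDropTime = TimeUtils.nanoTime();
    }

    @Override
    public void render() {
        ScreenUtils.clear(0, 0, 0.2f, 1);
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        batch.draw(bucketImage, bucket.x, bucket.y);
        for (Rectangle raindrop : raindrops) {
            batch.draw(dropImage, raindrop.x, raindrop.y);
        }
        batch.end();

        if (Gdx.input.isTouched()) {
            Vector3 touchPos = new Vector3();
            touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
            camera.unproject(touchPos);
            bucket.x = touchPos.x - 64 / 2;
        }
        if (Gdx.input.isKeyPressed(Keys.LEFT)) bucket.x -= 200 * Gdx.graphics.getDeltaTime();
        if (Gdx.input.isKeyPressed(Keys.RIGHT)) bucket.x += 200 * Gdx.graphics.getDeltaTime();
        if (bucket.x < 0) bucket.x = 0;
        if (bucket.x > 800 - 64) bucket.x = 800 - 64;

        if (TimeUtils.nanoTime() - lastDropTime > 1000000000) spawnRaindrop();
        for (Iterator<Rectangle> iter = raindrops.iterator(); iter.hasNext(); ) {
            Rectangle raindrop = iter.next();
            raindrop.y -= 200 * Gdx.graphics.getDeltaTime();
            if (raindrop.y + 64 < 0) iter.remove();
            else if (raindrop.overlaps(bucket)) {
                dropSound.play();
                iter.remove();
            }
        }
    }

    @Override
    public void dispose() {
        dropImage.dispose();
        bucketImage.dispose();
        dropSound.dispose();
        rainMusic.dispose();
        batch.dispose();
    }
}
```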

Where to go from here

This was a very basic example of how to use libGDX to create a minimalistic game. There are quite a few things that can be improved from here on. Your next steps should most certainly entail looking at the Screen and Game classes. To learn about these, there is a second tutorial following on from this one.
