Bad apple all versions

Bad Apple printed out on the console with Python!

A word of disclaimer: while the final code is somewhat original, this project is an amalgamation of different code snippets that I found online. As the main YouTube video begins to gain traction, I feel the need to inform the audience that this code is NOT ENTIRELY ORIGINAL.

The concept of playing Bad Apple!! on a Command Line Interface (CLI) is not a novel idea and I am definitely not the first.

There are many iterations and versions around YouTube and I wanted to give it a shot. The intent of posting the video on YouTube was to show a few friends a simple weekend project that I whipped up in Python.

My own video can be found here.

Running this code / Prerequisites

Thanks to TheHusyin for adding a requirements.txt file for easier installs.

You can either git clone or download a ZIP of this repository.

git clone https://github.com/CalvinLoke/bad-apple

Then, ensure that you set your terminal to the directory of this repository.

Install the necessary dependencies and packages by using:

pip install -r requirements.txt

And to run the code, launch the script of your choice, for example:

python touhou_bad_apple_v4.0.py

Then just follow the on-screen prompts.

Currently, my implementation of a rudimentary static time.sleep() call accumulates a small error on every frame, which causes the frame accuracy to drift over time.
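The drift is easy to demonstrate: a static sleep ignores the time spent printing each frame, so every frame overshoots by the per-frame work, and the error adds up. A small sketch (the 2 ms of "work" per frame is an assumed stand-in for the console print):

```python
import time

FPS = 30
FRAME_TIME = 1.0 / FPS
WORK = 0.002  # pretend printing one frame takes 2 ms

start = time.perf_counter()
for _ in range(60):          # two seconds of "video"
    time.sleep(FRAME_TIME)   # static sleep: ignores time already spent working
    time.sleep(WORK)         # stand-in for the per-frame print
elapsed = time.perf_counter() - start
drift = elapsed - 60 * FRAME_TIME
print(f"drifted {drift * 1000:.0f} ms after 2 s of video")
```

The drift here is at least 60 × WORK (plus sleep overhead), and it keeps growing with playback length, which is why the audio slowly desynchronizes.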

UPDATE on 22/04/21

With the replacement of the playsound library with pygame, the error over time seems to have been fixed, though further improvements and optimizations to the code can still be made. Performance is currently still not optimal: a major bottleneck lies in the IOPS when dealing with the .txt files, and I am still trying to find a better implementation.

I am also looking into improving frame extraction and generation times.

UPDATE ON 23/04/21

It seems that frame extraction is heavily bottlenecked by the drive’s IOPS, and adding threads did not expedite frame extraction further. I have created some rudimentary code for process-based and thread-based frame extraction, and am looking to implement it for the ASCII generation soon.

UPDATE ON 25/04/21

I got around to implementing multiprocessing for both frame extraction and ASCII generation, though my implementation of threading/processing is still very botchy, so asset generation remains sub-optimal. I am not too sure how far I want to take this project, but my main priority right now is to adjust frame timings.

SECOND UPDATE ON 25/04/21

Simply by replacing the primitive time.sleep() function with the fpstimer library, frame-time accuracy has been drastically improved. I will be slowing down my code optimizations for playback from now on.
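The reason this fixes the drift is that pacing against an absolute deadline, rather than sleeping a fixed amount, stops per-frame error from accumulating. A minimal stdlib sketch of the same idea (the FrameTimer class below is my own stand-in, not the fpstimer API):

```python
import time

class FrameTimer:
    """Drift-free frame pacing: sleep until an absolute deadline,
    so per-frame timing error does not accumulate."""
    def __init__(self, fps):
        self.frame_time = 1.0 / fps
        self.next_deadline = time.perf_counter() + self.frame_time

    def sleep(self):
        remaining = self.next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
        # advance by exactly one frame, regardless of how long we slept
        self.next_deadline += self.frame_time

# pace a 30 fps loop for one second of "playback"
timer = FrameTimer(30)
start = time.perf_counter()
for _ in range(30):
    timer.sleep()
elapsed = time.perf_counter() - start
print(round(elapsed, 2))  # close to 1.0 regardless of per-frame work
```

Because the deadline advances by exactly one frame interval each time, a slow frame is automatically compensated for by a shorter sleep on the next one.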

Though the main concern right now is trying to optimize asset generation times.

THIRD UPDATE ON 25/04/21

Changed the approach to storing assets, which should significantly reduce asset generation times, now averaging around 10-15 seconds on a single thread. I will still look into threading to further expedite asset generation. However, with touhou_bad_apple_v4.0.py, progress will now slow down as I finally close the chapter of this project.

UPDATE ON 27/04/21

It looks like most of the issues have been rectified, and the code has reached desirable performance. While I could further boost ASCII generation and add new functionality to the code, I feel that would be over-engineering such a simple project. What started out as a weekend project blew up to such proportions, and led me to learn many new and interesting concepts along the way.

I really would like to thank JasperTecHK for his recommendations and suggestions along the way. His input was what really led me to return to this project after two dead weeks.

As such, major updates to the code will come much slower now, as the current iteration of the project has far exceeded my original goal. It would be interesting to further develop v4.5 to have colour support, but I presume that requires its own development cycle. Once again, I would really like to thank all the contributors to this simple and dumb piece of code that I wrote in 24 hours.

Current known issues and bugs

Despite being a somewhat simple program, my crappy implementation has led to a lot of unresolved bugs and issues. I am currently looking at fixing some of them.

  1. block=False is not supported in Linux (Only for v2.0 and below)

I am currently trying to find alternatives to the playsound library. Using two different threads is not an option currently, as I was running into desynchronization issues.

This issue has been fixed in v2.5 (formerly v3), alongside other performance improvements.

  2. No such file or directory: ‘ExtractedFrames/BadApple_1.jpg’ (Only for v3.0 and v2.5)

Not really sure how this is happening, but I will be looking into fixing it. I was unable to replicate the error, but I assume it is due to my botchy implementation of file directories for the assets.

The issue could be due to the host machine not having ffmpeg installed. Ensure that you have ffmpeg installed and run the script again. v4 and v4.5 will not return this error, though I will need to do some limit testing to figure it out.

First rudimentary version that accomplishes basic frame extraction and animation. Utilizes threads, but suffers from heavy synchronization issues.

Extended version that includes a "GUI" and some basic file I/O. Suffers from slight synchronization issues. Core program logic was completed in 24 hours, with some minor tweaks and comments afterwards.

  1. touhou_bad_apple_v3.py ==> Renamed to touhou_bad_apple_v2.5.py

Current development version, with improved frame-time delay and better file I/O. Looking to implement threading to expedite frame extraction and ASCII conversion. Play-testing version for pygame. It doesn’t really warrant a full version increment, so I will be updating the name to v2.5 or something like that once the new v4 is ready.

Slightly better version due to the incorporation of pygame for music playback. Rectifies an issue when attempting to play on Linux-based environments, since the older playsound library did not support block=False there.

Still has rudimentary frame extraction and ASCII generation on single thread/process, which makes asset generation significantly longer.

  1. touhou_bad_apple_v4.py ==> Renamed to touhou_bad_apple_v3.0.py

(Almost) rewritten, as the previous code was getting too messy to work with. Functions from previous versions are still used, though.

Will be renamed to v3 once I improve asset generation times with better threading code. However, "v4" is currently the most frame-accurate version thanks to the fpstimer library, and subsequent changes are only smaller performance optimizations.

Rewritten to incorporate multiprocessing, though the implementation is very janky. The overall program structure was also refactored a bit to clean up the main() function. Asset generation times were reduced somewhat, but the double for loop means generation times are close to a minute.

  1. touhou_bad_apple_v4-5.py ==> Renamed to touhou_bad_apple_v4.0.py

Once again my dumb naming schemes kick in. After some toying around, I decided to scrap the .txt file generation and skip straight to storing ASCII within memory. This version completely rewrites the asset generation algorithm. Instead of the old

Video => Extracted_Images (stored in storage) => ASCII Characters (stored in memory) => .txt (stored in storage)

process, ASCII generation is done on the image stored within memory, so

Video => Extracted_Images (stored in memory) => ASCII Characters (stored in memory) => Internal list (stored in memory)

This makes more sense compared to older iterations and significantly cuts down asset generation times.
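The in-memory pipeline above can be sketched as follows. The frame source here is a stub (in the real project frames come from the video file, e.g. via OpenCV), and the brightness ramp is an assumption rather than the project's exact character set:

```python
# Sketch of the v4 in-memory pipeline: frames never touch disk.
ASCII_CHARS = " .:-=+*#%@"  # dark -> bright ramp (assumed, not the project's exact ramp)

def frame_to_ascii(frame):
    """Convert a 2D list of 0-255 grayscale values into one ASCII string."""
    rows = []
    for row in frame:
        rows.append("".join(ASCII_CHARS[p * (len(ASCII_CHARS) - 1) // 255] for p in row))
    return "\n".join(rows)

def fake_frames(n, w=8, h=4):
    """Stand-in frame source: n grayscale frames of increasing brightness."""
    for i in range(n):
        level = i * 255 // max(n - 1, 1)
        yield [[level] * w for _ in range(h)]

# Video => frames (memory) => ASCII (memory) => internal list (memory)
ascii_frames = [frame_to_ascii(f) for f in fake_frames(3)]
print(ascii_frames[0].splitlines()[0])   # darkest frame: a row of spaces
print(ascii_frames[-1].splitlines()[0])  # brightest frame: a row of '@'
```

Playback then just iterates over ascii_frames and prints each string, which is why the storage IOPS bottleneck disappears entirely.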

While this means that 10 to 20 seconds is still required for ASCII generation, it eliminates the storage I/O bottleneck and also frees up a lot of storage space on the host system. Overall, probably the best one yet?

  1. touhou_bad_apple_v5.py ==> Renamed to touhou_bad_apple_v4.5.py

Honestly, I should never get a job in file versioning. This version essentially allows the user to ASCII-fy any video, provided that they have the video file in the root directory.

The main functions will be listed here.

Reads the previously generated ASCII .txt files and prints them out onto the console.

Plays the bad apple audio track.

progress_bar(current, total, barLength=25)

A simple progress bar function that displays the status of both frame extraction and ASCII frame generation. This code was taken from a StackOverflow thread.

current is the current value/progress of the process.

total is the desired/intended end value of the process.

barLength=25 sets the length of the progress bar. (Default is 25 characters)
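A sketch of what such a function looks like (the exact StackOverflow snippet isn't reproduced here, so the bar characters and formatting details are my own):

```python
import sys

def progress_bar(current, total, barLength=25):
    """Render and print an in-place progress bar like [####----] 50.0%."""
    fraction = current / total
    filled = int(barLength * fraction)
    bar = "#" * filled + "-" * (barLength - filled)
    line = f"[{bar}] {fraction * 100:.1f}%"
    sys.stdout.write("\r" + line)  # \r rewrites the same console line
    sys.stdout.flush()
    if current == total:
        sys.stdout.write("\n")
    return line

# simulate a 4-step job
for step in range(5):
    progress_bar(step, 4)
```

The carriage return is what makes the bar update in place instead of spamming new lines, which matters when it is called once per extracted frame.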

ASCII Frame generation

Not a particular function, but a group of functions.

These functions are called in the ascii_generator() function to convert image files to ASCII format and stores them into .txt files.

Note that the ASCII conversion code is not original, and was taken from here.

Words of acknowledgements

I should give credit where credit is due, and here is a section dedicated to that.

ZUN, for his incredible work on the Touhou Project over the past decades.

Ronald Macdonald, for making the MIDI Arrangement of the Bad-Apple!! used.

GitHub users karoush1, JasperTecHK, TheHusyin, Mirageofmage for their comments and bugfixes.

About

Bad Apple printed out on the console with Python!

Bad Apple — BBC Micro — Teletext

Video codec and player for BBC MODE 7 (aka Teletext)

See the demo version on our site.

«Bad Apple» — The definitive BBC Micro/Teletext Version

The Touhou Bad Apple video has become a benchmark for pushing retro computing power to the limits. While it has been ported to many other 8-bit platforms, we are now pleased to present the definitive BBC Micro version in glorious Teletext pixel graphics.

Our version is a full 3m21s of video, played back at 25 frames per second in Teletext / MODE 7.

MODE 7 on the BBC Micro used a Mullard SAA5050 Teletext display/decoder chip which (apart from subtle implementation differences) is the same Teletext chip used in analogue TVs. It is 40×25 characters, supporting 8 primary colours, with support for text characters and basic graphical effects using control codes embedded into each character row. Support for teletext on the BBC Micro was an original requirement of the BBC’s specification for the machine, due to their own use of broadcast teletext (Ceefax).
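To give a flavour of how pixels become characters in MODE 7: a contiguous-graphics character packs a 2-wide by 3-tall block of "sixels" into one byte, with one bit per pixel and bit 5 skipped for character-set reasons. A Python sketch of that packing, to the best of my understanding of the SAA5050 encoding:

```python
def pack_sixel(block):
    """Pack a 2-wide x 3-tall block of 0/1 pixels into a teletext
    contiguous-graphics character code. Bit layout (bit 5 unused):
      bit0 top-left,    bit1 top-right,
      bit2 middle-left, bit3 middle-right,
      bit4 bottom-left, bit6 bottom-right."""
    bits = [block[0][0], block[0][1],
            block[1][0], block[1][1],
            block[2][0], block[2][1]]
    weights = [1, 2, 4, 8, 16, 64]  # note the jump from 16 to 64: bit 5 is skipped
    return 0x20 + sum(w for b, w in zip(bits, weights) if b)

blank = [[0, 0], [0, 0], [0, 0]]
full  = [[1, 1], [1, 1], [1, 1]]
print(hex(pack_sixel(blank)))  # 0x20 (space: all pixels off)
print(hex(pack_sixel(full)))   # 0x7f (all six pixels on)
```

So each 40×25 character cell gives an effective 80×75 pixel graphics resolution, which is the canvas the video frames are quantized down to.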

The music is a custom VGM chiptune, hand designed by Inverse Phase for the BBC Micro’s SN76489. You can support IP’s excellent work by becoming a patron here.

Intro art by Horsenburger, and you can buy awesome stuff from Horsenburger’s store.

The code, music & screens are crammed into a standard 8-bit 2MHz 6502 based BBC Micro’s 32Kb RAM, and the video is streamed into memory track by track, after being heavily compressed to fit on one single 400Kb double sided floppy disk image.

For more information on teletext, take a look at the following sites:

  • TeletextR — News & Happenings in the world of Teletext
  • Edit.TF — A Web Based Teletext Editor
  • Facebook Teletext Group — Teletext Community Group
  • Dan Farrimond’s Art — Awesome teletext art
  • Horsenburger’s Art — Additional awesome teletext art

We wanted to create a demo that would work on a standard 32Kb BBC Micro Model B with a single double sided 400Kb disk image. Clearly this would be a challenge given the memory constraints — somehow we’d need to squeeze over 3 minutes of music and video into the available system RAM and disk space.

Читайте также:  Нашел айфон пишет айфон отключен

The music is played back on the BBC Micro by writing raw register updates at 50Hz (via interrupts) to the SN76489 sound chip. Our musician (Inverse Phase) created the music in Deflemask and exported a 50Hz 150bpm NTSC 3.58MHz VGM file (which is essentially a raw stream of register data updates).

This VGM file was then processed using Simon’s VGM conversion tool to transpose the music to the BBC Micro’s 4MHz clock speed (so that it sounds correct, because the SN76489 generates frequencies based on the system clock signal fed into it).

The same script also outputs the VGM file in a more compact binary format, which takes up a lot less memory and is easier to compress.
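The transposition step boils down to rescaling each tone register: the SN76489 produces a tone of clock / (32 × N) Hz for a 10-bit register value N, so keeping the pitch constant across the clock change means scaling N by the clock ratio. A sketch of that arithmetic (the real conversion tool of course handles the full VGM command stream, not single notes):

```python
NTSC_CLOCK = 3579545  # Hz, the clock the VGM was authored for
BBC_CLOCK = 4000000   # Hz, the SN76489 clock on the BBC Micro

def sn76489_freq(clock, n):
    """Tone frequency produced by a 10-bit tone register value n."""
    return clock / (32 * n)

def transpose_register(n):
    """Rescale a tone register so the pitch survives the clock change,
    clamped to the chip's 10-bit register range."""
    return max(1, min(1023, round(n * BBC_CLOCK / NTSC_CLOCK)))

# A440 needs n ~= 254 on the NTSC clock; after transposing it is still ~440 Hz
n_ntsc = round(NTSC_CLOCK / (32 * 440))
n_bbc = transpose_register(n_ntsc)
print(round(sn76489_freq(NTSC_CLOCK, n_ntsc)))  # -> 440
print(round(sn76489_freq(BBC_CLOCK, n_bbc)))    # -> 440
```

Without this rescaling, playing the NTSC register values on a 4MHz clock would shift every note sharp by the ratio of the two clocks (about a semitone).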

Finally, this data was compressed using Exomizer, which reduced the file size to 10.3Kb. Our first compression attempt reduced the file to 19Kb, which wasn’t enough to fit it into memory with all of the other code. Our musician came up with a cunning plan to remove vibrato on some of the melody tones, which did further reduce the memory usage (to 12Kb), but it sounded plainer. After a long evening of trying to find a better way, analysing the data and looking for patterns, we discovered that if we just compressed the file using a 2Kb compression dictionary window instead of 1Kb, we could get the file size down to 10Kb and keep all of the nice vibrato!

The music is stored in memory compressed, and simply unpacked on demand as we move through the file.

We wanted to add an intro sequence AS WELL as all the music & video. This presented a few challenges too, because memory AND disk space were running short. So in the end, we loaded all of the intro sequence as separately compressed MODE 7 screen grabs, stored in the same memory locations as the disk streaming buffers that are later used by the video decompression system. This means the intro data is trashed once the video player starts, but that’s OK.

Horsenburger is something of a whizz kid at teletext art (having been an ACTUAL real life teletext artist back in the day) and he kindly offered to help us with some intro screens which I’m sure you’ll agree are pretty awesome.

If all of the above wasn’t enough (and by the time we’d finished cramming in the music, the video and the intro, we were running pretty low on free RAM and disk space), we wanted to get some credits in too. These were done about 2 days before the teletext block party event, when we put together a quick scrolly effect rendering text using a teletext ‘sixels’ font.

Inverse Phase spotted the credits and suggested we put some music on there too! MORE stuff to cram in! 🙂

Well, we managed to do that by forcing a quick reload of some data from disk to memory (the same memory as the video stream buffer, actually), but now it wasn’t scrolling at 50Hz and there was a lot of raster tearing going on — just a side effect of the 6502’s speed limits. So, one last look at the code: we managed to shave 10 CPU cycles per character off the update loop and reduce the scroll area by 2 character lines, and voila — 50Hz smooth scrolling!
