Saturday, May 11, 2013

Frames Capture and Video Creation Using Xuggler

The concept is to open the media file, loop through a specific video stream, capture the corresponding frame at specific intervals, convert it to an image, and dump the binary contents into a file. Here is what the code for all this looks like:

package com.javacodegeeks.xuggler;

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;

import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.xuggler.Global;

public class VideoThumbnailsExample {

    public static final double SECONDS_BETWEEN_FRAMES = 10;

    private static final String inputFilename = "c:/Java_is_Everywhere.mp4";
    private static final String outputFilePrefix = "c:/snapshots/mysnapshot";

    // The video stream index, used to ensure we display frames from one and
    // only one video stream from the media container.
    private static int mVideoStreamIndex = -1;

    // Time of last frame write
    private static long mLastPtsWrite = Global.NO_PTS;

    public static final long MICRO_SECONDS_BETWEEN_FRAMES =
        (long) (Global.DEFAULT_PTS_PER_SECOND * SECONDS_BETWEEN_FRAMES);

    public static void main(String[] args) {

        IMediaReader mediaReader = ToolFactory.makeReader(inputFilename);

        // stipulate that we want BufferedImages created in BGR 24-bit color space
        mediaReader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

        mediaReader.addListener(new ImageSnapListener());

        // read out the contents of the media file and
        // dispatch events to the attached listener
        while (mediaReader.readPacket() == null) ;
    }

    private static class ImageSnapListener extends MediaListenerAdapter {

        @Override
        public void onVideoPicture(IVideoPictureEvent event) {

            if (event.getStreamIndex() != mVideoStreamIndex) {
                // if the selected video stream id is not yet set, go ahead and
                // select this lucky video stream
                if (mVideoStreamIndex == -1)
                    mVideoStreamIndex = event.getStreamIndex();
                // no need to show frames from this video stream
                else
                    return;
            }

            // if uninitialized, backdate mLastPtsWrite to get the very first frame
            if (mLastPtsWrite == Global.NO_PTS)
                mLastPtsWrite = event.getTimeStamp() - MICRO_SECONDS_BETWEEN_FRAMES;

            // if it's time to write the next frame
            if (event.getTimeStamp() - mLastPtsWrite >=
                    MICRO_SECONDS_BETWEEN_FRAMES) {

                String outputFilename = dumpImageToFile(event.getImage());

                // indicate file written
                double seconds = ((double) event.getTimeStamp()) /
                    Global.DEFAULT_PTS_PER_SECOND;
                System.out.printf(
                    "at elapsed time of %6.3f seconds wrote: %s\n",
                    seconds, outputFilename);

                // update last write time
                mLastPtsWrite += MICRO_SECONDS_BETWEEN_FRAMES;
            }
        }

        private String dumpImageToFile(BufferedImage image) {
            try {
                String outputFilename = outputFilePrefix +
                    System.currentTimeMillis() + ".png";
                ImageIO.write(image, "png", new File(outputFilename));
                return outputFilename;
            }
            catch (IOException e) {
                e.printStackTrace();
                return null;
            }
        }
    }
}
This might seem a bit overwhelming, but it is really quite straightforward. Let me provide some details. We start by creating an IMediaReader from an input file. The media reader is used to read and decode media. Since we wish to manipulate the captured video frames as images, we use the setBufferedImageTypeToGenerate method to denote this. The reader opens up a media container, reads packets from it, decodes the data, and then dispatches information about the data to any registered IMediaListener objects. Here is where our custom class, ImageSnapListener, comes into play.
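As a side note, the read loop at the end of main relies on readPacket() returning null while there is still data to decode. Here is a minimal sketch, based on my reading of the Xuggler API, of how the returned IError could be inspected to tell a normal end-of-file apart from a genuine failure:

import com.xuggle.xuggler.IError;

// readPacket() returns null while more data remains; once it returns
// an IError, check whether we simply reached the end of the container.
IError err;
while ((err = mediaReader.readPacket()) == null)
    ;  // decoded frames are dispatched to the listeners as a side effect
if (err.getType() != IError.Type.ERROR_EOF)
    System.err.println("Decoding stopped early: " + err);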

Our listener extends MediaListenerAdapter, an adapter (it provides empty method implementations) for the IMediaListener interface. Objects that implement this interface are notified about events generated during video processing. We only care about handling video events, so we only implement the IMediaListener.onVideoPicture method. Inside it, we use the provided IVideoPictureEvent object to find out which (video-only) stream we are dealing with.
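To illustrate the adapter pattern at work, here is a minimal, hypothetical sketch of a listener that overrides a different event, onAudioSamples, and nothing else; the empty implementations inherited from MediaListenerAdapter handle every event type we ignore:

import com.xuggle.mediatool.MediaListenerAdapter;
import com.xuggle.mediatool.event.IAudioSamplesEvent;

// Hypothetical listener: reacts only to decoded audio, relying on the
// adapter's empty implementations for all other events.
class AudioLogListener extends MediaListenerAdapter {
    @Override
    public void onAudioSamples(IAudioSamplesEvent event) {
        System.out.println("decoded " +
            event.getAudioSamples().getNumSamples() + " audio samples");
    }
}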

Since we wish to capture frames at specific times, we have to mess a little with timestamps. First, we make sure we handle the very first frame by checking against the value of the Global.NO_PTS constant, a value meaning that no time stamp is set for a given object. Then, if the minimum elapsed time has passed, we capture the frame by invoking the IVideoPictureEvent.getImage method, which returns the underlying BufferedImage. Note that we are talking about elapsed video time and not "real time". We then dump the image data to a file in PNG format using the ImageIO.write utility method. Finally, we update the last write time.
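To make the timestamp arithmetic concrete, here is an isolated sketch of the interval check, under the assumption that Xuggler timestamps are expressed in microseconds (Global.DEFAULT_PTS_PER_SECOND being 1,000,000):

// Assuming microsecond timestamps: a 10-second interval is
// 10,000,000 microsecond units.
long interval = (long) (Global.DEFAULT_PTS_PER_SECOND * 10);
long timeStamp = event.getTimeStamp(); // elapsed video time in microseconds
if (timeStamp - mLastPtsWrite >= interval) {
    // capture the frame, then advance the marker by one full interval
    // (rather than to timeStamp itself) so rounding does not accumulate
    mLastPtsWrite += interval;
}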

Let’s run this application in order to see the results. As input file, I am using an old Sun commercial proclaiming that “Java is Everywhere”. I have downloaded the provided MP4 version locally. Here is what the output console will look like:

at elapsed time of 0.000 seconds wrote: c:/snapshots/mysnapshot1298228503292.png
at elapsed time of 10.010 seconds wrote: c:/snapshots/mysnapshot1298228504014.png
at elapsed time of 20.020 seconds wrote: c:/snapshots/mysnapshot1298228504463.png

at elapsed time of 130.063 seconds wrote: c:/snapshots/mysnapshot1298228509454.png
at elapsed time of 140.007 seconds wrote: c:/snapshots/mysnapshot1298228509933.png
at elapsed time of 150.017 seconds wrote: c:/snapshots/mysnapshot1298228510379.png

The total video time is about 151 seconds, so we capture 16 frames (at 0, 10, 20, …, 150 seconds of elapsed video time).


In order to create video, we will have to take a somewhat more low-level approach in comparison to the MediaTool API that we have seen so far. Don’t worry though, it is not going to be complicated. The main idea is that we create a media writer, add some stream information to it, encode our media (the screenshot images), and close the writer. Let’s see the code used to achieve this:

package com.javacodegeeks.xuggler;

import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.concurrent.TimeUnit;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class ScreenRecordingExample {

    private static final double FRAME_RATE = 50;

    private static final int SECONDS_TO_RUN_FOR = 20;

    private static final String outputFilename = "c:/mydesktop.mp4";

    private static Dimension screenBounds;

    public static void main(String[] args) {

        // let's make an IMediaWriter to write the file.
        final IMediaWriter writer = ToolFactory.makeWriter(outputFilename);

        screenBounds = Toolkit.getDefaultToolkit().getScreenSize();

        // We tell it we're going to add one video stream, with id 0,
        // at position 0, encoded with MPEG-4 at half the screen dimensions.
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4,
                screenBounds.width / 2, screenBounds.height / 2);

        long startTime = System.nanoTime();

        for (int index = 0; index < SECONDS_TO_RUN_FOR * FRAME_RATE; index++) {

            // take the screen shot
            BufferedImage screen = getDesktopScreenshot();

            // convert to the right image type
            BufferedImage bgrScreen = convertToType(screen,
                    BufferedImage.TYPE_3BYTE_BGR);

            // encode the image to stream #0
            writer.encodeVideo(0, bgrScreen, System.nanoTime() - startTime,
                    TimeUnit.NANOSECONDS);

            // sleep for the duration of one frame
            try {
                Thread.sleep((long) (1000 / FRAME_RATE));
            }
            catch (InterruptedException e) {
                // ignore
            }
        }

        // tell the writer to close and write the trailer if needed
        writer.close();
    }

    public static BufferedImage convertToType(BufferedImage sourceImage, int targetType) {

        BufferedImage image;

        // if the source image is already the target type, return the source image
        if (sourceImage.getType() == targetType) {
            image = sourceImage;
        }
        // otherwise create a new image of the target type and draw the new image
        else {
            image = new BufferedImage(sourceImage.getWidth(),
                    sourceImage.getHeight(), targetType);
            image.getGraphics().drawImage(sourceImage, 0, 0, null);
        }

        return image;
    }

    private static BufferedImage getDesktopScreenshot() {
        try {
            Robot robot = new Robot();
            Rectangle captureSize = new Rectangle(screenBounds);
            return robot.createScreenCapture(captureSize);
        }
        catch (AWTException e) {
            e.printStackTrace();
            return null;
        }
    }
}


We start by creating an IMediaWriter from a given output file. This class encodes and decodes media, handling both audio and video streams. Xuggler guesses the output format from the file name extension (in our case MP4) and sets some default values appropriately. We then use the addVideoStream method to add a new video stream, providing its index, the codec type to use (MPEG-4 here) and the video dimensions. The dimensions are set equal to half of the screen’s dimensions in this example.
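As a quick illustration of the extension-based format guessing, the sketch below (with a hypothetical output path) is all it would take to target a different container; only the file name changes:

// Hypothetical path: the .mov extension should make Xuggler pick a
// QuickTime container instead of MP4, with no other code changes.
IMediaWriter movWriter = ToolFactory.makeWriter("c:/mydesktop.mov");
movWriter.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, 640, 480);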

Then we execute a loop that runs for a number of iterations equal to the desired frame rate multiplied by the desired running time (20 × 50 = 1000 iterations here). Inside the loop, we generate a screen snapshot as described in the Java2D: Screenshots with Java article. We retrieve the screenshot as a BufferedImage and convert it to the appropriate type (TYPE_3BYTE_BGR) if it is not already of that type.

Next, we encode the image into the video stream using the IMediaWriter.encodeVideo method. We provide the stream index, the image, the elapsed video time and the time unit. Then, we sleep for the appropriate amount of time, depending on the desired frame rate. When the loop is over, we close the writer, which also writes the trailer if the video format requires one (this is done automatically by Xuggler).
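Note that because the timestamps come from System.nanoTime(), any jitter in Thread.sleep ends up encoded in the video. A sketch of an alternative, assuming microsecond timestamps via Global.DEFAULT_PTS_PER_SECOND (from com.xuggle.xuggler.Global), derives each frame’s timestamp from its loop index instead, giving perfectly even frame spacing:

// Derive the timestamp from the frame index: frame N is encoded at
// exactly N / FRAME_RATE seconds, regardless of how long sleep took.
long timeStamp = (long) (index * (Global.DEFAULT_PTS_PER_SECOND / FRAME_RATE));
writer.encodeVideo(0, bgrScreen, timeStamp, TimeUnit.MICROSECONDS);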

If we execute the application, a video recording of our desktop actions will be created.
