Kinect - Processing 2.x and 3.x Forum

Spout and videoExporter Enabled Kinect Masker


Over the last two weeks I have gone from not knowing much about Processing to having a final product. Thank you to @GoToLoop, @hamoid, and others: your sketches and contributions to this forum are invaluable!

I give you Body Mapper!!

Attached is a sketch that interfaces with a Kinect V1. The depth image is used to create a mask overlay, and user videos are used as textures for projection mapping, or more specifically body mapping. The sketch looks for .mp4 and .mov files in your data directory and lets you cycle forwards and backwards through these videos. When you are ready, you can either export the result to .mp4 via the onboard VideoExport library (saved in a directory called "savedVideo" that you need to create yourself), or share the frames via Spout. (I really wished there was a Spout recorder, similar to Syphon Recorder, and now there is, because I made one using Max MSP.)

Let me know what you think! I'm sure I have made some strange code, but then again I don't really know what I'm doing and this is very much a learning experience for me.

Enjoy!

N

//    BODY MAPPER

//Cobbled together by Nicolas de Cosson 2016


//
//            SpoutSender
//
//      Send to a Spout receiver
//
//           spout.zeal.co
//
//       http://spout.zeal.co/download-spout/
//
/**
 * Movie Player (v1.21)
 * by GoToLoop  (2014/Oct/31)
 *
 * forum.processing.org/two/discussion/7852/
 * problem-with-toggling-between-multiple-videos-on-processing-2-2-1
 */
/*
  This sketch shows how you can record different takes.
 */

import com.hamoid.*;
import processing.video.Movie;
import spout.*;
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
import org.gstreamer.elements.PlayBin2;
import java.io.FilenameFilter;

static final PlayBin2.ABOUT_TO_FINISH FINISHING = new PlayBin2.ABOUT_TO_FINISH() {
  @Override public void aboutToFinish(PlayBin2 elt) {
  }
};

//useful so that we do not overwrite movie files in the save directory
int ye = year();
int mo = month();
int da = day();
int ho = hour();
int mi = minute();
int se = second();
//global frames per second
static final float FPS = 30.0;
//index
int idx;
//string array for films located in data directory
String[] FILMS;
//string for whether or not we are exporting to .mp4 using ffmpeg
String record;
boolean isPaused;
boolean recording = false;
// Depth image
PImage depthImg;
// Which pixels do we care about?
int minDepth =  60;
int maxDepth = 800;
//max depth 2048

//declare a kinect object
Kinect kinect;
//declare videoExport
VideoExport videoExport;
//movie array
Movie[] movies;
//movie
Movie m;
// DECLARE A SPOUT OBJECT
Spout spout;


void setup() {
  //I have to call resize because for some reason P2D does not
  //seem to actually size to display width/height on the first call
  size(displayWidth, displayHeight, P2D);
  surface.setResizable(true);
  surface.setSize(displayWidth, displayHeight);
  surface.setLocation(0, 0);
  noSmooth();
  frameRate(FPS);
  background(0);

  kinect = new Kinect(this);
  kinect.initDepth();

  // Blank image with alpha channel
  depthImg = new PImage(kinect.width, kinect.height, ARGB);

  // CREATE A NEW SPOUT OBJECT
  spout = new Spout(this);
  //CREATE A NAMED SENDER
  spout.createSender("BodyMapper Spout");

  println("Press R to toggle recording");
  //.mp4 is created with year month date hour minute and second data so we never save over a video
  videoExport = new VideoExport(this, "savedVideo/Video" + ye + mo + da + ho + mi + se + ".mp4");

  videoExport.setFrameRate(15);

  //videoExport.forgetFfmpegPath();
  //videoExport.dontSaveDebugInfo();

  java.io.File folder = new java.io.File(dataPath(""));

  // this is the filter (returns true if file's extension is .mov or .mp4)
  java.io.FilenameFilter movFilter = new java.io.FilenameFilter() {
    String[] exts = {
      ".mov", ".mp4"
    };
    public boolean accept(File dir, String name) {
      name = name.toLowerCase();
      for (String ext : exts) if (name.endsWith(ext)) return true;
      return false;
    }
  };
  //create an array of strings comprised of .mov/.mp4 in data directory
  FILMS = folder.list(movFilter);
  //using the number of videos in data directory we can create array of videos
  movies = new Movie[FILMS.length];

  for (String s : FILMS)  (movies[idx++] = new Movie(this, s))
    .playbin.connect(FINISHING);
  //start us off by playing the first movie in the array
  (m = movies[idx = 0]).loop();
}

void draw() {
  // Threshold the depth image
  int[] rawDepth = kinect.getRawDepth();
  for (int i=0; i < rawDepth.length; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      //if pixels are in range then turn them to alpha transparency
      depthImg.pixels[i] = color(0, 0);
    } else {
      //otherwise turn them black
      depthImg.pixels[i] = color(0);
    }
  }
  //update pixels from depth map to reflect change of pixel colour
  depthImg.updatePixels();
  //blur the edges of depth map
  depthImg.filter(BLUR, 1);


  //draw movie to size of current display
  image(m, 0, 0, displayWidth, displayHeight);
  //draw depth map mask to size of current display
  image(depthImg, 0, 0, displayWidth, displayHeight);
  //share image through Spout
  // Sends at the size of the window
  spout.sendTexture();
  //if key r is pressed begin export of .mp4 to save directory
  if (recording) {
    videoExport.saveFrame();
  }
  //TODO - create second window for preferences and instructions
  fill(255);
  text("Recording is " + (recording ? "ON" : "OFF"), 30, 100);
  text("Press r to toggle recording ON/OFF", 30, 60);
  text("Video saved to file after application is closed", 30, 80);
}

void movieEvent(Movie m) {
  m.read();
}

void keyPressed() {
  int k = keyCode;
  if (k == RIGHT) {
    // Cycle forwards
    if (idx >= movies.length - 1) {
      idx = 0;
    } else {
      idx += 1;
    }
  } else if (k == LEFT) {
    // Cycle backwards
    if (idx <= 0) {
      idx = movies.length - 1;
    } else {
      idx -= 1;
    }
  }

  if (k == LEFT || k == RIGHT) {
    m.stop();
    (m = movies[idx]).loop();
    isPaused = false;
    background(0);
  }

  if (key == 'r' || key == 'R') {
    recording = !recording;
    println("Recording is " + (recording ? "ON" : "OFF"));
  }
}

@Override public void exit() {
  for (Movie m : movies)  m.stop();
  super.exit();
}

Applying a mask flips or rotates the image!


Hi. I'm developing in Eclipse with a Kinect (depth sensor).

Currently I'm trying to mask images, but for some reason as soon as I apply a filter, the image flips (or rotates 180º), and I just can't get it back to its original position.

This happens not only with the mask() method, but also with other filters (the filter() method itself, or blend()).

Here is an example of the code:

PGraphics mappingZone;
.. mappingZone = applet.createGraphics(kinect.width, kinect.height, PApplet.P2D);
.. filteredImg = applet.createImage(kinect.width, kinect.height, PApplet.ARGB); ... // createImage() takes an image format (ARGB/RGB/ALPHA), not a renderer

  mappingZone.beginDraw();
  mappingZone.background(0);
  mappingZone.image(mappingTexture, 0, 0);
  mappingZone.mask(filteredImg);
  mappingZone.endDraw();

  applet.image(mappingZone, 0, 0, applet.width, applet.height);

I tried several things to flip the image back to the correct position, like mappingZone.scale(-1, -1);

But nothing works. Strangely, applying a second filter usually returns the image to the correct position. Any clue?
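A workaround that often helps (a minimal sketch; it assumes the flip comes from the inverted Y axis of the OpenGL-based renderers, and that mappingZone is the masked buffer from above): instead of trying to flip the buffer itself, draw it through a mirrored coordinate system:

applet.pushMatrix();
applet.translate(0, applet.height); // move the origin to the bottom-left corner first
applet.scale(1, -1);                // then mirror the Y axis
applet.image(mappingZone, 0, 0, applet.width, applet.height);
applet.popMatrix();

Note that scale(-1, -1) on its own draws the image off-screen, which is why it appears to do nothing; the translate() has to come first.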


Kinect color recognition IR


Hi there! I'm new to Processing and Kinect. I'm trying to get the Kinect to track the white color in IR. The sketch runs, but I don't think it's tracking the color, since it's supposed to draw a circle around it and doesn't. This is the code:

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;

Kinect kinect;

color trackColor;
float deg;
boolean ir = false;


void setup() {
  size(640, 520);
  kinect = new Kinect(this);
  kinect.initVideo();
  trackColor = color(255);
  deg = kinect.getTilt();

}


void draw() {
  background(0);
  image(kinect.getVideoImage(), 0, 0);
  fill(255);
  text(
    "Press 'i' to enable/disable between video image and IR image,  " +
    "UP and DOWN to tilt camera   " +
    "Framerate: " + int(frameRate), 10, 515);

  float worldRecord = 500;
  int closestX = 0;
  int closestY = 0;

  if (worldRecord < 10) {
    // Draw a circle at the tracked pixel
    fill(trackColor);
    strokeWeight(4.0);
    stroke(0);
    ellipse(closestX, closestY, 16, 16);
  }
}

void keyPressed() {
  if (key == 'i') {
    ir = !ir;
    kinect.enableIR(ir);
  } else if (key == CODED) {
    if (keyCode == UP) {
      deg++;
    } else if (keyCode == DOWN) {
      deg--;
    }
    deg = constrain(deg, 0, 30);
    kinect.setTilt(deg);
  }
}

This is probably the simplest thing ever, but I'm a noob and I can't figure it out. Could you help? Is there something missing in the code? Thanks a lot!
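For reference, nothing in draw() ever updates worldRecord, closestX or closestY, so the circle condition never fires. A minimal version of the missing scan (a sketch following the usual color-distance tracking pattern; it uses the same variable names as the code above and would go just before the if):

PImage cam = kinect.getVideoImage();
cam.loadPixels();
for (int x = 0; x < cam.width; x++) {
  for (int y = 0; y < cam.height; y++) {
    color current = cam.pixels[x + y * cam.width];
    // Euclidean distance between this pixel's color and the tracked color
    float d = dist(red(current), green(current), blue(current),
                   red(trackColor), green(trackColor), blue(trackColor));
    if (d < worldRecord) {
      worldRecord = d;
      closestX = x;
      closestY = y;
    }
  }
}

With that in place, the if (worldRecord < 10) threshold decides how close a match has to be before the circle is drawn.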

How to average values over multiple frames in Processing


I'm working on this code to manage and save data coming from the Microsoft Kinect. The data are stored in the int array int[] depthValues. What I'd like to do is store and save an average of several frames (let's say 10) in order to get smoother data, leaving the rest of the code as it is.

Here's the code:

import java.io.File;
import SimpleOpenNI.*;
import java.util.*;
SimpleOpenNI kinect;
void setup()
{
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}
int precedente = millis();
void draw()
{
  kinect.update();
  PImage depthImage = kinect.depthImage();
  image(depthImage, 0, 0);
  int[] depthValues = kinect.depthMap();
  //depthValues = reverse(depthValues);
  StringBuilder sb = new StringBuilder();
  Deque<Integer> row = new LinkedList<Integer>();
  int kinectheight = 770; // kinect distance from the baselevel [mm]
  int scaleFactor = 1;
  int pixelsPerRow = 640;
  int pixelsToSkip = 40;
  int rowNum = 0;
  for (int i = 0; i < depthValues.length; i++) {
    if (i > 0 && i == (rowNum + 1) * pixelsPerRow) {
      fillStringBuilder(sb, row);
      rowNum++;
      sb.append("\n");
      row = new LinkedList<Integer>();
    }
    if (i >= (rowNum * pixelsPerRow) + pixelsToSkip) {
      row.addFirst((kinectheight - depthValues[i]) * scaleFactor);
    }
  }
  fillStringBuilder(sb, row);
  String kinectDEM = sb.toString();
  final String[] txt = new String[1]; //creates a string array of 1 element
  int savingtimestep = 15000;  // time step in millisec between each saving
  if (millis() > precedente + savingtimestep) {
    txt[0] = "ncols         600\nnrows         480\nxllcorner     0\nyllcorner     0\ncellsize      91.6667\nNODATA_value  10\n" +kinectDEM;
    saveStrings("kinectDEM0.tmp", txt);
    precedente = millis();
    //  delete the old .txt file, from kinectDEM1 to kinectDEMtrash
    File f = new File (sketchPath("kinectDEM1.txt"));
    boolean success = f.delete();

    //  rename the old .txt file, from kinectDEM0 to kinectDEM1
    File oldName1 = new File(sketchPath("kinectDEM0.txt"));
    File newName1 = new File(sketchPath("kinectDEM1.txt"));
    oldName1.renameTo(newName1);
    //  rename kinectDEM0.tmp file to kinectDEM0.txt
    File oldName2 = new File(sketchPath("kinectDEM0.tmp"));
    File newName2 = new File(sketchPath("kinectDEM0.txt"));
    oldName2.renameTo(newName2);

  }
}
void fillStringBuilder(StringBuilder sb, Deque<Integer> row) {
  boolean emptyRow = false;
  while (!emptyRow) {
    Integer val = row.pollFirst();
    if (val == null) {
      emptyRow = true;
    } else {
      sb.append(val);
      val = row.peekFirst();
      if (val != null) {
        sb.append(" ");
      }
    }
  }
}
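One way to do the averaging is to keep the last N frames in a list and sum them element-wise. A minimal sketch (untested against the code above; averageFrames() would be called right after kinect.depthMap(), and its result used in place of depthValues):

ArrayList<int[]> recentFrames = new ArrayList<int[]>();
int framesToAverage = 10;

int[] averageFrames(int[] depthValues) {
  recentFrames.add(depthValues.clone());   // keep a copy of the newest frame
  if (recentFrames.size() > framesToAverage) {
    recentFrames.remove(0);                // drop the oldest frame
  }
  int[] avg = new int[depthValues.length];
  for (int[] f : recentFrames) {
    for (int i = 0; i < f.length; i++) {
      avg[i] += f[i];
    }
  }
  for (int i = 0; i < avg.length; i++) {
    avg[i] /= recentFrames.size();
  }
  return avg;
}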

Infrared object track


Hey everyone! I'm trying to track a white object with the Kinect v1 infrared vision. Any idea how to do that? I have two IR LED illuminators pointing at the white object to make it more visible to the Kinect, but I need it to recognize the object. I've seen a bunch of color-tracking examples, but they don't work with the IR.

I would really appreciate your help. I've spent hours searching for how to do this, but I'm quite new to processing and kinect, so your help would be precious.
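In IR mode the image is effectively grayscale, so color tracking reduces to brightness tracking. A minimal sketch of the idea (assuming Shiffman's Open Kinect library with kinect.enableIR(true), as in the other threads here; this would run inside draw()):

PImage ir = kinect.getVideoImage();   // IR frames arrive through the video image
ir.loadPixels();
float record = 0;
int brightestX = 0;
int brightestY = 0;
for (int x = 0; x < ir.width; x++) {
  for (int y = 0; y < ir.height; y++) {
    float b = brightness(ir.pixels[x + y * ir.width]);
    if (b > record) {
      record = b;
      brightestX = x;
      brightestY = y;
    }
  }
}
noFill();
stroke(255, 0, 0);
ellipse(brightestX, brightestY, 16, 16); // circle the brightest (IR-lit) spot

For a whole object rather than a single pixel, collecting every pixel above a brightness threshold and averaging their positions gives a more stable centroid.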

Is it still possible to use the Kinect 1 for Xbox with the OpenNI libraries?


Hey there, I'm trying the OpenNI libraries in Processing, but I've found you need to install the binaries. Apple bought OpenNI, the binaries are no longer available on its page, and I can't find them anywhere. Does anyone know if there's a way to get this working? I just want to use the sensor; it's properly installed. This is what the console shows when I run the sketch:

"You are running Processing revision 0227, the latest build is 0250. SimpleOpenNI Version 1.96 After initialization:"

Revision 0250 is Processing 3; I ran the sketch there and it showed some deprecated functions. :/ Help me please!

Thanks in advance.

How to use a filter (like a mustache or something) on your face with Processing


Hi there!

For a school project I want to create a couple of face filters in Processing. I already know how the webcam works in Processing, but I can't find out how to overlay my own filter images (some glasses, mustaches or simply a big nose). Face recognition also has to work in Processing, but I don't know how. Can somebody help me, please?

P.S. Sorry for my English...
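A common starting point (a minimal sketch, assuming the OpenCV for Processing library by Greg Borenstein and a glasses.png overlay image you supply yourself) is to run the built-in frontal-face cascade on each webcam frame and draw the overlay relative to the detected face rectangle:

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
PImage glasses;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // built-in Haar cascade
  glasses = loadImage("glasses.png");             // your own overlay image
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);
  for (Rectangle face : opencv.detect()) {
    // scale the overlay to the face width; the vertical offset is a rough guess to tune
    image(glasses, face.x, face.y + face.height / 4, face.width, face.height / 3);
  }
}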

Help! Kinect works briefly then dies on OS X


Hello,

I have the Kinect 1414 model and am using Processing 3 with the Open Kinect Library.

I have been able to initialize the device's RGB and depth cameras. However, they only stay active if I remain still in front of the camera. If I start to move too much, I get this in the console:

"Got cancelled transfer, but we didn't request it - device disconnected? USB camera marked dead, stopping streams send_cmd: Output control transfer failed (-99) write_register: send_cmd() returned -99 USB camera marked dead, stopping streams"

Has anybody had issues with the device disconnecting randomly?


IndexOutOfBoundsException when trying to use an external webcam


Hi! I'm pretty new to Processing, so I'm sorry if this is really simple to solve and I just don't see it, or for other mistakes I'm making.

I made a sketch for a 'tattoo projection mapping' project and it worked, but only with my built-in webcam. I want to use an external webcam, but I can't seem to change the code without getting an error.

I used the 'GettingStartedCapture' example from the Video library to see what the name of my webcam was. When I run this code separately, everything works.

The only thing I changed in my code is
video = new Capture(this, 320, 240);
to
video = new Capture(this, "name=Microsoft® LifeCam HD-3000,size=640x480,fps=30");

The error I get is 'IndexOutOfBoundsException: Index: 3, Size: 0' on the line opencv.loadImage(video);

Line 65 is the line I've changed, line 84 is the line I get the error on.
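One thing worth checking (a hedged suggestion, since the exception means OpenCV is receiving no frames): the configuration string has to match an entry from Capture.list() exactly, so it is safer to pick the camera by index:

String[] cameras = Capture.list();
println(cameras);  // find the LifeCam's position in the console output
// hypothetical index 0; replace it with the LifeCam's actual position in the list
video = new Capture(this, cameras[0]);
video.start();

Also note that the sketch window and OpenCV buffer are 320x240 while the requested capture is 640x480; matching those sizes is worth trying too.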

Full code (it's long, sorry for the excess):

    // docs.opencv.org/master/db/dd6/classcv_1_1RotatedRect.html#gsc.tab=0
    // processing.org/reference/textureMode_.html
    // processing.org/reference/vertex_.html
    // rotatedRect angle calculation: stackoverflow.com/questions/24073127/opencvs-rotatedrect-angle-does-not-provide-enough-information

    // A lot of native OpenCV for java code is used. Mainly because not everything is implemented in the Processing library.

    boolean animationHasBeenStarted;
    import gab.opencv.*;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.core.Core;
    //import org.opencv.imgproc.Moments;

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.CvType;
    import org.opencv.core.RotatedRect;

    import java.awt.Rectangle;

    import org.opencv.core.Point;
    import org.opencv.core.Size;

    import org.opencv.core.Scalar;

    import processing.video.*;
    Movie tattooImg;
    //ArrayList<Contour> contours;
    //ArrayList<MatOfPoint> contours;
    //ArrayList<MatOfPoint2f> approximations;
    //ArrayList<MatOfPoint2f> markers;

    //ArrayList<PVector> hierarchyVectors;

    PImage src, dst;
    Mat hierarchy;

    //ArrayList<Contour> polygons;
    //ArrayList<Moments> mu;

    MatOfPoint largestContoursMat;

    //ArrayList<Contour> contours;
    //ArrayList<Contour> polygons;

    OpenCV opencv;

    Mat workMat;

    double largest_area = 0.0;

    Capture video;

    PImage maskImg;

    RotatedRect rRect;


    void setup() {

      size(320, 240, P2D);

      video = new Capture(this, "name=Microsoft® LifeCam HD-3000,size=640x480,fps=30");
      video.start();

      opencv = new OpenCV( this, video.width, video.height);
      opencv.useColor();

      maskImg = createImage(opencv.width, opencv.height, RGB);

      tattooImg = new Movie(this, "Tattoo 2.mov");
      //tattooImg.loop();
      //tattooImg.speed(1.5);
    }

    void draw() {
      //image(tattooImg, mouseX, mouseY);

      if (video.available()) {
        video.read();
        //markerDetector.processFrame(video, true);
        opencv.loadImage(video);

        // call process function
        processWithOpenCV();
      }

      image( opencv.getOutput(), 0, 0 );
      // image( maskImg,320,0);

      // draw some things on top of the image
      // only when we have found the largestContour.
      // and when the area size is above a certain threshold
      if (largestContoursMat != null && largest_area > 2500.0) {

        if (animationHasBeenStarted == false)
        {
          tattooImg.play();
          tattooImg.speed(0.2);

          animationHasBeenStarted = true;
        }


        //strokeWeight(2);
        //stroke(255,0,0);
        //noFill();

        noStroke();

        beginShape();
        //textureMode(NORMAL);
        texture(tattooImg);

        Point[] vertices = new Point[4];
        rRect.points(vertices);
        //vertices[4] = vertices[0];

        //Point[] points = largestContoursMat.toArray();
        //Point[] points = contoursMat.get();

        //for (int j = 0; j < vertices.length; j++) {
        //  vertex((float)vertices[j].x, (float)vertices[j].y);
        //}

        vertex((float)vertices[0].x, (float)vertices[0].y, 0, 0);
        vertex((float)vertices[1].x, (float)vertices[1].y, tattooImg.width, 0);
        vertex((float)vertices[2].x, (float)vertices[2].y, tattooImg.width, tattooImg.height);
        vertex((float)vertices[3].x, (float)vertices[3].y, 0, tattooImg.height);

        endShape();

        float blob_angle_deg = (float) rRect.angle;
        if (rRect.size.width < rRect.size.height) {
          blob_angle_deg = 90 + blob_angle_deg;
        }

        //text(blob_angle_deg, 10,10);

        noFill();
        //strokeWeight(2);
        //stroke(0,0,255);

        //beginShape();

        //MatOfPoint c = contoursMat.get(largest_contour_index);
        //Point[] points = largestContoursMat.toArray();
        //Point[] points = contoursMat.get();

        //for (int j = 0; j < points.length; j++) {
        //  vertex((float)points[j].x, (float)points[j].y);
        // }
        // endShape();


        //pushMatrix();
        //  rotate(radians(blob_angle_deg));
        //  translate((float)vertices[0].x, (float)vertices[0].y);
        //  scale( (float) (rRect.size.width/tattooImg.width), (float)(rRect.size.height/tattooImg.height));
        //  image(tattooImg,0,0);
        //popMatrix();
      } else {
        animationHasBeenStarted = false;
      }
    }
    void movieEvent(Movie m) {
      m.read();
    }

    void processWithOpenCV() {

      // create the matrix in the size of the input image
      // can this be done faster?
      Mat workMat  = OpenCV.imitate(opencv.getColor());

      // here we put the video image in the matrix.
      OpenCV.toCv(video, workMat);
      // switch colors
      OpenCV.ARGBtoBGRA(workMat, workMat);

      // convert to YCrCb
      Imgproc.cvtColor(workMat, workMat, Imgproc.COLOR_BGR2YCrCb);

      // check skin range
      Core.inRange(workMat, new Scalar(0, 133, 77), new Scalar(255, 173, 127), workMat);

      // eliminate noise with erode and dilate
      // http://www.tutorialspoint.com/java_dip/eroding_dilating.htm
      int erosion_size = 4;
      int dilation_size = 4;

      Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new  Size(2*erosion_size + 1, 2*erosion_size+1));
      Imgproc.erode(workMat, workMat, element);

      Mat element1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new  Size(2*dilation_size + 1, 2*dilation_size+1));
      Imgproc.dilate(workMat, workMat, element1);

      // blur it a bit
      Imgproc.GaussianBlur(workMat, workMat, new Size(5, 5), 0);

      maskImg = opencv.getSnapshot(workMat);

      //put the matrix in our opencv object, just for display
      //opencv.setGray(workMat);

      Mat hierarchyMat = new Mat();
      ArrayList<MatOfPoint> contoursMat = new ArrayList<MatOfPoint>();

      Imgproc.findContours(workMat, contoursMat, hierarchyMat, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

      // reset the global largest_area
      largest_area = 0.0;
      int    largest_contour_index = 0;

      if (contoursMat.size() > 0) {

        for ( int i = 0; i< contoursMat.size(); i++) {

          MatOfPoint c = contoursMat.get(i);

          double a = Imgproc.contourArea(c); //,false);  //  Find the area of contour

          if (a > largest_area) {
            largest_area = a;
            largest_contour_index = i;                //Store the index of largest contour
          }
        }

        //println(largest_area);

        //Convert contours(i) from MatOfPoint to MatOfPoint2f
        MatOfPoint2f contourMMOP2f = new MatOfPoint2f();

        // get the largest Contour and get the RotatedRect from it.
        largestContoursMat = contoursMat.get(largest_contour_index);
        contoursMat.get(largest_contour_index).convertTo(contourMMOP2f, CvType.CV_32FC2);

        rRect = Imgproc.minAreaRect(contourMMOP2f);
      }
    }

Flickering image when trying to load multiple videos


Hello. When I try to load multiple videos using keyPressed, my videos start flickering. I'm not even sure this is the right way to switch between videos, but it's working for me except for the flickering. I already have a trigger (the amount of skin colour in the screen) for the videos to start playing; the only thing keyPressed does is switch between videos. This is the part that (I think) is causing the problem:

    if (animationHasBeenStarted == false)
    { {
      if (keyPressed) {
        if (key == '1') {
          tattooImg.stop();
          tattooImg = new Movie(this, "Tattoo 2.mov");
          tattooImg.play();
        }
        if (key == '2') {
          tattooImg.stop();
          tattooImg = new Movie(this, "Tattoo 3.mov");
          tattooImg.play();
        }
      }

      tattooImg.play();
      tattooImg.speed(0.2);
      tattooImg.noLoop();
      animationHasBeenStarted = true;
    }

And this is the full code:

        //docs.opencv.org/master/db/dd6/classcv_1_1RotatedRect.html#gsc.tab=0
        //processing.org/reference/textureMode_.html
        //processing.org/reference/vertex_.html
        // rotatedRect angle calculation: stackoverflow.com/questions/24073127/opencvs-rotatedrect-angle-does-not-provide-enough-information

        // A lot of native OpenCV for java code is used. Mainly because not everything is implemented in the Processing library.

        boolean animationHasBeenStarted;
        import gab.opencv.*;
        import org.opencv.imgproc.Imgproc;
        import org.opencv.core.Core;
        //import org.opencv.imgproc.Moments;

        import org.opencv.core.Mat;
        import org.opencv.core.MatOfPoint;
        import org.opencv.core.MatOfPoint2f;
        import org.opencv.core.MatOfPoint2f;
        import org.opencv.core.CvType;
        import org.opencv.core.RotatedRect;

        import java.awt.Rectangle;

        import org.opencv.core.Point;
        import org.opencv.core.Size;

        import org.opencv.core.Scalar;

        import processing.video.*;

        Movie tattooImg;
        //ArrayList<Contour> contours;
        //ArrayList<MatOfPoint> contours;
        //ArrayList<MatOfPoint2f> approximations;
        //ArrayList<MatOfPoint2f> markers;

        //ArrayList<PVector> hierarchyVectors;

        PImage src, dst;
        Mat hierarchy;

        //ArrayList<Contour> polygons;
        //ArrayList<Moments> mu;

        MatOfPoint largestContoursMat;

        //ArrayList<Contour> contours;
        //ArrayList<Contour> polygons;

        OpenCV opencv;

        Mat workMat;

        double largest_area = 0.0;

        Capture video;

        PImage maskImg;

        RotatedRect rRect;


        void setup() {

          size(320, 240, P2D);
          String[] cameras = Capture.list();
          //video = new Capture(this, 320, 240);
          video = new Capture(this, 320, 240, cameras[30]);
          video.start();

          opencv = new OpenCV( this, video.width, video.height);
          //opencv.useColor();

          maskImg = createImage(opencv.width, opencv.height, RGB);
          tattooImg = new Movie(this, "Tattoo 1.mov");

        }




        void draw() {
          background(255);
          //image(tattooImg, mouseX, mouseY);
          if (video.available()) {
            video.read();
            //markerDetector.processFrame(video, true);
            //println(video.height);
            //PImage test = video;
            opencv.loadImage(video);

            // call process function
            processWithOpenCV();

          }



          //image(video,0,0);
          //image( opencv.getOutput(), 0, 0 );
          //image( maskImg,320,0);

          // draw some things on top of the image
          // only when we have found the largestContour.
          // and when the area size is above a certain threshold
          if (largestContoursMat != null && largest_area > 2500.0) {

            if (animationHasBeenStarted == false)
            { {
              if (keyPressed) {
                if (key == '1') {
                  tattooImg.stop();
                  tattooImg = new Movie(this, "Tattoo 2.mov");
                  tattooImg.play();
                }
                if (key == '2') {
                  tattooImg.stop();
                  tattooImg = new Movie(this, "Tattoo 3.mov");
                  tattooImg.play();
                }
              }

              tattooImg.play();
              tattooImg.speed(0.2);
              tattooImg.noLoop();
              animationHasBeenStarted = true;
            }


            //strokeWeight(2);
            //stroke(255,0,0);
            //noFill();

            noStroke();

            beginShape();
              //textureMode(NORMAL);
              texture(tattooImg);

              Point[] vertices = new Point[4];
              rRect.points(vertices);
              //vertices[4] = vertices[0];

              //Point[] points = largestContoursMat.toArray();
              //Point[] points = contoursMat.get();

              //for (int j = 0; j < vertices.length; j++) {
              //  vertex((float)vertices[j].x, (float)vertices[j].y);
              //}

              vertex((float)vertices[0].x, (float)vertices[0].y, 0,               0);
              vertex((float)vertices[1].x, (float)vertices[1].y, tattooImg.width, 0);
              vertex((float)vertices[2].x, (float)vertices[2].y, tattooImg.width, tattooImg.height);
              vertex((float)vertices[3].x, (float)vertices[3].y, 0,               tattooImg.height);

            endShape();

            float blob_angle_deg = (float) rRect.angle;
            if (rRect.size.width < rRect.size.height) {
              blob_angle_deg = 90 + blob_angle_deg;
            }

            //text(blob_angle_deg, 10,10);

            noFill();
            //strokeWeight(2);
            //stroke(0,0,255);

            //beginShape();

            //MatOfPoint c = contoursMat.get(largest_contour_index);
            //Point[] points = largestContoursMat.toArray();
            //Point[] points = contoursMat.get();

            //for (int j = 0; j < points.length; j++) {
            //  vertex((float)points[j].x, (float)points[j].y);
           // }
           // endShape();


            //pushMatrix();
            //  rotate(radians(blob_angle_deg));
            //  translate((float)vertices[0].x, (float)vertices[0].y);
            //  scale( (float) (rRect.size.width/tattooImg.width), (float)(rRect.size.height/tattooImg.height));
            //  image(tattooImg,0,0);
            //popMatrix();

            } else {
              animationHasBeenStarted = false;
            }
          }
        }
        void movieEvent(Movie m) {
          m.read();
        }

        void processWithOpenCV() {

          // create the matrix in the size of the input image
            // can this be done faster?
            Mat workMat  = OpenCV.imitate(opencv.getColor());

            // here we put the video image in the matrix.
            OpenCV.toCv(video, workMat);
            // switch colors
            OpenCV.ARGBtoBGRA(workMat,workMat);

            // convert to YCrCb
            Imgproc.cvtColor(workMat, workMat, Imgproc.COLOR_BGR2YCrCb);

            // check skin range
            Core.inRange(workMat, new Scalar(0, 133, 77), new Scalar(255,173,127), workMat);

            // eliminate noise with erode and dilate
            // http://www.tutorialspoint.com/java_dip/eroding_dilating.htm
            int erosion_size = 0;
            int dilation_size = 0;

            Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new  Size(2*erosion_size + 1, 2*erosion_size+1));
            Imgproc.erode(workMat, workMat, element);

            Mat element1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new  Size(2*dilation_size + 1, 2*dilation_size+1));
            Imgproc.dilate(workMat, workMat, element1);

            // blur it a bit
            Imgproc.GaussianBlur(workMat, workMat, new Size(5, 5), 0);

            maskImg = opencv.getSnapshot(workMat);

            //put the matrix in our opencv object, just for display
            //opencv.setGray(workMat);

            Mat hierarchyMat = new Mat();
            ArrayList<MatOfPoint> contoursMat = new ArrayList<MatOfPoint>();

            Imgproc.findContours(workMat, contoursMat, hierarchyMat, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            // reset the global largest_area
            largest_area = 0.0;
            int    largest_contour_index = 0;

            if(contoursMat.size() > 0) {

              for( int i = 0; i< contoursMat.size(); i++) {

                  MatOfPoint c = contoursMat.get(i);

                  double a = Imgproc.contourArea(c); //,false);  //  Find the area of contour

                  if(a > largest_area) {
                    largest_area = a;
                    largest_contour_index = i;                //Store the index of largest contour
                  }
              }

              //println(largest_area);

              //Convert contours(i) from MatOfPoint to MatOfPoint2f
              MatOfPoint2f contourMMOP2f = new MatOfPoint2f();

              // get the largest Contour and get the RotatedRect from it.
              largestContoursMat = contoursMat.get(largest_contour_index);
              contoursMat.get(largest_contour_index).convertTo(contourMMOP2f, CvType.CV_32FC2);

              rRect = Imgproc.minAreaRect(contourMMOP2f);

            }

        }
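Two things stand out (a hedged reading of the code above): first, the else that resets animationHasBeenStarted binds to the if (animationHasBeenStarted == false) block, so the flag flips back every frame while a contour is visible and the texture is drawn only every other frame, which looks like flicker; second, while a key is held down, a brand-new Movie object is created on every pass through draw(). A sketch of an alternative (same tattooImg and animationHasBeenStarted globals as above; switchMovie() is a hypothetical helper) that switches movies once per key press, in the event handler instead of draw():

void keyPressed() {
  if (key == '1') {
    switchMovie("Tattoo 2.mov");
  } else if (key == '2') {
    switchMovie("Tattoo 3.mov");
  }
}

// hypothetical helper: stop the old movie and start the new one exactly once
void switchMovie(String filename) {
  if (tattooImg != null) tattooImg.stop();
  tattooImg = new Movie(this, filename);
  tattooImg.play();
  tattooImg.speed(0.2);
  tattooImg.noLoop();
  animationHasBeenStarted = true;
}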

Is it possible to make this with a webcam instead of a Kinect, and how?


// import library
import SimpleOpenNI.*;
// declare SimpleOpenNI object
SimpleOpenNI context;

// PImage to hold incoming imagery
PImage cam;

void setup() {
  // same as Kinect dimensions
  size(640, 480);
  // initialize SimpleOpenNI object
  context = new SimpleOpenNI(this);
  if (!context.enableScene()) {
    // if context.enableScene() returns false
    // then the Kinect is not working correctly
    // make sure the green light is blinking
    println("Kinect not connected!");
    exit();
  } else {
    // mirror the image to be more intuitive
    context.setMirror(true);
  }
}

void draw() {
  // update the SimpleOpenNI object
  context.update();
  // put the image into a PImage
  cam = context.sceneImage().get();
  // display the image
  image(cam, 0, 0);
}
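For a plain webcam the Kinect-specific parts drop away, but so does what makes this sketch useful: sceneImage() returns a depth-based segmentation of the scene, which a normal camera cannot provide. A minimal equivalent that just shows the camera image (a sketch using the standard processing.video library):

import processing.video.*;

Capture cam;

void setup() {
  // same dimensions as the Kinect version
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}

Anything that depended on depth (user segmentation, distance) would have to be approximated with computer vision, e.g. background subtraction or color tracking with the OpenCV library.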

SimpleOpenNI stopped working


Hi

I have previously had a lot of Kinect sketches using the SimpleOpenNI library running on my computer, but I haven't opened them for half a year. Yesterday I tried some of them, as well as some of the examples that come with SimpleOpenNI.

At first they just seem not to run: no new sketch window opens, but a Java app runs in the background with the name of the sketch.

The console prints out "SimpleOpenNI Version 1.96".

After 5 minutes I get a pop-up message that the sketch has unexpectedly quit, and the console gives me this:

*************
SimpleOpenNI Version 1.96
send_cmd: Data buffer is 322 bytes long, but got 334 bytes
After initialization:

freenect_fetch_zero_plane_info: send_cmd read 334 bytes (expected 322)
freenect_camera_init(): Failed to fetch zero plane info for device
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Cannot open Kinect
Could not run the sketch (Target VM failed to initialize).
Make sure that you haven't set the maximum available memory too high.
For more information, read revisions.txt and Help → Troubleshooting.
************

I am using Processing 2.2.1 and Mac OS 10.11.2. My Kinect is model 1473. I tried lowering the maximum available memory without luck.

Does anybody have a clue about what is going on / how to fix it?

Cheers

How to make SimpleOpenNI work with Kinect and Processing


Hi everyone! I'm a complete beginner at Kinect-Processing integration and I've stumbled into some issues. I'm trying to follow the book "Making Things See" by Greg Borenstein and got stuck: I can't run anything that involves the SimpleOpenNI library. I'm using Processing 2.2.1 and SimpleOpenNI 1.96. Below I've put the code I'm trying to run and the error message. If someone can give me advice on how to proceed, I would be very grateful.

This is the sample code from the beginning of the book, and I was not able to get it running.

import SimpleOpenNI.*;
SimpleOpenNI kinect;

void setup() {
     size(640*2, 480);
  kinect = new SimpleOpenNI(this);

  kinect.enableDepth();
  kinect.enableRGB();

}

void draw(){
  kinect.update();
  image(kinect.depthImage(), 0, 0);
  image(kinect.rgbImage(), 640, 0);
}

The error message is:

SimpleOpenNI Version 1.96

A fatal error has been detected by the Java Runtime Environment:

SIGILL (0x4) at pc=0x00000001a623d0b4, pid=726, tid=57099

JRE version: Java(TM) SE Runtime Environment (7.0_55-b13) (build 1.7.0_55-b13)
Java VM: Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode bsd-amd64 compressed oops)
Problematic frame: C [libfreenect.0.1.2.dylib+0x40b4] freenect_camera_init+0x178

Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

An error report file with more information is saved as: /Users/zeljko/Documents/Processing/libraries/SimpleOpenNI/library/osx/hs_err_pid726.log

If you would like to submit a bug report, please visit: http://bugreport.sun.com/bugreport/crash.jsp The crash happened outside the Java Virtual Machine in native code. See problematic frame for where to report the bug.

After initialization:

Could not run the sketch (Target VM failed to initialize). For more information, read revisions.txt and Help → Troubleshooting.

SimpleOpenNi Libraries


Where can I download the SimpleOpenNI library? https://code.google.com/archive/p/simple-openni closed in 2015. Is this library mirrored somewhere trustworthy?

I ask because KinectPV2 keeps throwing this error in both the 32- and 64-bit versions of Processing:

64 windows 7
A library relies on native code that's not available.
Or only works properly when the sketch is run as a 32-bit application

and Shiffman's Open Kinect for Processing doesn't do skeleton tracking.

Thanks for any help.

How to make the image appear for a few moments?


I have this code, but I can't figure out how to make the image stay on the screen longer, so that it doesn't disappear quickly. Can you help, please?

for (int i = 0; i < fullbody.length; i++) {
  println(fullbody[i].x + "," + fullbody[i].y);
  image(myImageArray[(int) random(10)],fullbody[i].x, fullbody[i].y, fullbody[i].width, fullbody[i].height);
  smooth();
  delay(10);
}
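delay() inside draw() only stalls the whole sketch; it does not keep an image on screen. A common pattern (a minimal sketch with hypothetical names; displayTime is a value you tune) is to remember what was detected and when, then keep redrawing it every frame until the time runs out:

int lastDetection = 0;   // millis() timestamp of the last detection
int displayTime = 2000;  // keep the image visible this many milliseconds
PImage lastImage;        // which image was picked
float lastX, lastY, lastW, lastH;

// inside the detection loop, instead of drawing immediately:
// lastImage = myImageArray[(int) random(10)];
// lastX = fullbody[i].x;  lastY = fullbody[i].y;
// lastW = fullbody[i].width;  lastH = fullbody[i].height;
// lastDetection = millis();

// called at the end of draw(), every frame:
void drawOverlay() {
  if (lastImage != null && millis() - lastDetection < displayTime) {
    image(lastImage, lastX, lastY, lastW, lastH);
  }
}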

How to record Kinect depth data and work with it offline without a Kinect?


I am doing a project using Kinect/Processing for posture detection. I want to record the depth data using the Kinect sensor from a Processing sketch, then work with those data offline, i.e. without having the Kinect at hand. Is that possible with Processing? I already found the fakenect library for this purpose, but it was cumbersome. So I need help from you guys...
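A low-tech way to do this without fakenect (a minimal sketch, assuming SimpleOpenNI as in the other threads here; one text file per frame, which gets large fast) is to dump depthMap() to disk while recording and read the files back when no Kinect is attached:

int frameCounter = 0;

// recording: save one frame of depth values per file
void saveDepthFrame(int[] depthValues) {
  String[] lines = new String[depthValues.length];
  for (int i = 0; i < depthValues.length; i++) {
    lines[i] = str(depthValues[i]);
  }
  saveStrings("depth_" + nf(frameCounter++, 5) + ".txt", lines);
}

// playback: load a saved frame back, no Kinect required
int[] loadDepthFrame(int n) {
  String[] lines = loadStrings("depth_" + nf(n, 5) + ".txt");
  int[] depthValues = new int[lines.length];
  for (int i = 0; i < lines.length; i++) {
    depthValues[i] = int(lines[i]);
  }
  return depthValues;
}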

Smoothing Kinect Depth Data


Hello everyone,

I'm trying to smooth Kinect depth data by implementing the code from this post in Processing: http://www.codeproject.com/Articles/317974/KinectDepthSmoothing. But it's not giving me improved results.

I know I've made several mistakes, but I can't get things working.

I have created the methods createImageFromDepthImage and createSmoothImageFromDepthArray, but I cannot display the image using these methods...

Here is my code:

import SimpleOpenNI.*;

import java.util.Queue;
import java.util.ArrayDeque;
SimpleOpenNI kinect;
public Queue<int[]> averageQueue=new ArrayDeque<int[]>();
boolean useFiltering;
int innerBandThreshold;
int outerBandThreshold;
boolean userAverage;
int averageFrameCount;

int totalFrames;
int lastFrames;

int RedIndex=2;
int GreenIndex=1;
int BlueIndex=0;

int MaxDepthDistance=4000;
int MinDepthDistance=850;
int MaxDepthDistanceOffset=3150;
PImage r_image;
void setup(){
  size(320,240);
  kinect=new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();

}
void draw(){
  kinect.update();
  r_image=kinect.depthImage();
  createImageFromDepthImage(r_image);
  createSmoothImageFromDepthArray(r_image);
  image(r_image,0,0);
}
public int CalculateDistanceFromDepth(int first, int second){
  return first|second<<8;
}
public byte CalculateIntensityFromDistance(int distance){
  int newMax=distance-MinDepthDistance;
  if(newMax>0){
    return (byte)(255-(255*newMax/(MaxDepthDistanceOffset)));
  }else{
    return (byte)255;
  }
}
public int[] createDepthArray(PImage image){
 int[] returnArray=new int[image.width*image.height];
 //byte[] depthFrame=kinect.depthMap();
 int[] depthFrame=kinect.depthMap();

 for(int y=0; y<640; y+=2){
   for(int x=0; x<240; x++){
     int depthIndex=y+(x*640);
     int index=depthIndex/2;

     returnArray[index]=CalculateDistanceFromDepth(depthFrame[depthIndex],depthFrame[depthIndex+1]);
   }
 }
 return returnArray;
}


public int[] createAverageDepthArray(int[] depthArray){
 averageQueue.add(depthArray);

 CheckForDequeue();

 int[] sumDepthArray=new int[depthArray.length];
 int[] averageDepthArray=new int[depthArray.length];

 int denominator=0;
 int count=1;

 // newer frames get a higher weight, as in the original article;
 // denominator and count accumulate once per queued frame
 for(int[] item : averageQueue){
   for(int y=0;y<320;y++){
     for(int x=0;x<240;x++){
       int index=y+(x*320);
       sumDepthArray[index]+=item[index]*count;
     }
   }
   denominator+=count;
   count++;
 }

 for(int y1=0;y1<320;y1++){
   for(int x1=0;x1<240;x1++){
     int index1=y1+(x1*320);
     averageDepthArray[index1]=(short)(sumDepthArray[index1]/denominator);
   }
 }

 return averageDepthArray;
}

public void CheckForDequeue(){
  if(averageQueue.size()>averageFrameCount){
    averageQueue.remove();
    CheckForDequeue();
  }
}

public int[] createFilteredDepthArray(int[] depthArray, int width, int height){

  int[] smoothDepthArray=new int[depthArray.length];

  int widthBound=width-1;
  int heightBound=height-1;

  for(int y2=0;y2<320;y2++){
    for(int x2=0;x2<240;x2++){
    int depthIndex=y2+(x2*320);
    if(depthArray[depthIndex]==0){
      int a=depthIndex%320;
      int b=(depthIndex-a)/320;

      int[][]filterCollection=new int[24][2];

      int innerBandCount=0;
      int outerBandCount=0;

      for(int yi=-2; yi<3; yi++ ){
        for(int xi=-2; xi<3;xi++){
          if(xi !=0 || yi!=0){
            int xSearch=a+xi;
            int ySearch=b+yi;

            if(xSearch>=0 && xSearch<=widthBound && ySearch>=0 && ySearch<=heightBound){
              int index_s=xSearch+(ySearch*width);
              if(depthArray[index_s]!=0){
                for(int i=0;i<24;i++){
                  if(filterCollection[i][0]==depthArray[index_s]){
                    filterCollection[i][1]++;
                    break;
                  }else if(filterCollection[i][0]==0){
                    filterCollection[i][0]=depthArray[index_s]; // store the depth value itself in column 0
                    filterCollection[i][1]++;                   // and its frequency in column 1
                    break;
                  }
                }
                if(yi!=2 && yi!=-2 && xi!=2 && xi!=-2){
                  innerBandCount++;
                }else{
                  outerBandCount++;
                }
              }
            }
          }
        }

        if(innerBandCount>=innerBandThreshold || outerBandCount>=outerBandThreshold){
          int frequency=0;
          int depth=0;

          for(int i=0;i<24;i++){
            if(filterCollection[i][0]==0){
              break;
            }
            if(filterCollection[i][1]>frequency){
              depth=filterCollection[i][0];
              frequency=filterCollection[i][1];
            }
          }

          smoothDepthArray[depthIndex]=depth;
        }else{
          smoothDepthArray[depthIndex]=depthArray[depthIndex];
        }
      }

    }
  }

}
        return smoothDepthArray;

}


public PImage createImageFromDepthImage(PImage image){
  int width=image.width;
  int height=image.height;

 int[] depthFrame=kinect.depthMap();
 int[] colorFrame = new int[width * height * 4];

 for(int y=0;y<640;y+=2){
   for(int x=0;x<240;x++){
     int depthIndex=y+(x*640);

     int index=depthIndex*2;
     int distance=CalculateDistanceFromDepth(depthFrame[depthIndex],depthFrame[depthIndex+1]);
     int intensity=CalculateIntensityFromDistance(distance);

     colorFrame[index+BlueIndex]=intensity;
     colorFrame[index+GreenIndex]=intensity;
     colorFrame[index+RedIndex]=intensity;
   }
 }
 return image;

}

public PImage createSmoothImageFromDepthArray(PImage image){
  int width=image.width;
  int height=image.height;

  int[] depthArray=createDepthArray(image);
  if(useFiltering){
    depthArray=createFilteredDepthArray(depthArray,width,height);
  }
  if(userAverage){
    depthArray=createAverageDepthArray(depthArray);
  }
  int[] colorBytes=createColorBytesFromDepthArray(depthArray,width,height);
  return image;
}

public int[] createColorBytesFromDepthArray(int[] depthArray,int width,int height){
  int[] colorFrame=new int[width*height*4];

  for(int y=0;y<320;y++){
    for(int x=0;x<240;x++){
      int distanceIndex=y+(x*320);
      int index=distanceIndex*4;
      int intensity=CalculateIntensityFromDistance(depthArray[distanceIndex]);

      colorFrame[index+BlueIndex]=intensity;
      colorFrame[index+GreenIndex]=intensity;
      colorFrame[index+RedIndex]=intensity;
    }
  }

  return colorFrame;
}

Can anyone please help me with this?
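Part of the display problem is that createImageFromDepthImage() and createSmoothImageFromDepthArray() build a colorFrame array but never write it into a PImage, so draw() just shows the unmodified depthImage(). Also note that SimpleOpenNI's depthMap() already returns distances in millimetres, so the byte-pair reconstruction in CalculateDistanceFromDepth() (ported from the C# article) should not be needed, and the loops mixing 320/640-wide and 240-high bounds do not match the 640x480 depth map. A minimal sketch of the missing display step, under those assumptions:

PImage imageFromDepthArray(int[] depthArray, int w, int h) {
  PImage img = createImage(w, h, RGB);
  img.loadPixels();
  for (int i = 0; i < depthArray.length; i++) {
    int intensity = CalculateIntensityFromDistance(depthArray[i]) & 0xFF; // byte to unsigned
    img.pixels[i] = color(intensity);
  }
  img.updatePixels();
  return img;
}

// in draw(), for example:
// int[] depth = kinect.depthMap();                    // already millimetres
// depth = createFilteredDepthArray(depth, 640, 480);  // with 640x480 loop bounds
// depth = createAverageDepthArray(depth);
// image(imageFromDepthArray(depth, 640, 480), 0, 0);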

Working offline with depth data


Hi, I have some video including RGB data and depth data that were taken with a Kinect camera. How can I get the distance of each pixel from the camera? I need the distance of some pixels from the floor. Thanks.

class parameters


Hello. I have taken the standard particle system with a texture, and I want to pass the mouse location from the draw() function rather than inside the class, because I want to connect it with the Kinect. I tried it, but the result was strange. I tried moving the acceleration and the velocity, and the result is better but still strange. I also tried moving the lifespan parameter, but that is also strange. I think it is something with the addParticle() function and the iterator, but I'm not very comfortable with that concept, so I can't figure out the problem. I will be very thankful if you can help me. Thank you in advance.

here is the code:

import java.util.Iterator;
ParticleSystem ps;
PVector location;
PVector acceleration;
PVector velocity;
float lifesp = 255;
float _lifesp = 2.0;

void setup() {
  size(640, 360, P2D);
  PImage image = loadImage("texture.png");
  ps = new ParticleSystem(image);
}

void draw() {
  background(0);
  //ps.applyForce();
  acceleration = new PVector(0, 0.05);
  velocity = new PVector(random(-1, 1), random(-2, 0));
  PVector location = new PVector(mouseX, mouseY);
  lifesp -= _lifesp;
  ps.run(location, acceleration, velocity, lifesp);
  ps.addParticle();
}

the particle class:

class Particle {


  PVector loc;
  PVector location;

  float lifespan;
PImage image;




  Particle(PImage img) {



    image = img;
  }

  void run(PVector location ,PVector _acceleration, PVector _velocity,float _lifespan) {

    update(location,_acceleration,_velocity);
    display(location,_lifespan);
  }

  void update(PVector _location, PVector _acceleration, PVector _velocity) {

    velocity.add(_acceleration);
    _location.add(_velocity);

  }


  void display(PVector _location, float _lifespan) {

    stroke(0, lifespan);
    imageMode(CENTER);
    tint(255,lifespan);
    image(image,_location.x,_location.y);
    //fill(175, lifespan);
    //ellipse(location.x, location.y, 8, 8);
  }

  boolean isDead(float _lifespan) {
    lifespan = _lifespan;
    if (_lifespan < 0.0) {
      return true;
    } else {
      return false;
    }
  }
}

and the particle system class:

class ParticleSystem{

 ArrayList<Particle> particles;
PVector origin;

//PVector _loc;

PImage image;

ParticleSystem(PImage _img){

  particles = new ArrayList<Particle>();
  image = _img;

}

  void addParticle(){
   particles.add(new Particle(image));


  }

  void run(PVector location, PVector acceleration, PVector velocity,float _lifespan){
   Iterator<Particle> it = particles.iterator();
  while (it.hasNext()){

   Particle p = it.next();
   p.run(location,acceleration,velocity, _lifespan);
   if(p.isDead(_lifespan)){
     println("khgkhk");
    it.remove();
   }
  }

  }


}
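Most of the strangeness comes from sharing state: PVectors are passed by reference, so every particle mutates the same location vector that all the others use, and the single global lifesp makes all particles fade and die at the same moment. The usual fix is for each particle to copy its starting state in the constructor and own it from then on; a sketch of that (same texture idea as above):

class Particle {
  PVector location, velocity, acceleration;
  float lifespan = 255;
  PImage image;

  Particle(PImage img, PVector origin) {
    image = img;
    location = origin.copy(); // private copy, not a shared reference (use get() on Processing 2.x)
    velocity = new PVector(random(-1, 1), random(-2, 0));
    acceleration = new PVector(0, 0.05);
  }

  void run() {
    velocity.add(acceleration);
    location.add(velocity);
    lifespan -= 2.0;
    imageMode(CENTER);
    tint(255, lifespan);
    image(image, location.x, location.y);
  }

  boolean isDead() {
    return lifespan < 0.0;
  }
}

draw() then only passes the emitter position, e.g. ps.addParticle(new PVector(mouseX, mouseY)); followed by ps.run();, with the ParticleSystem forwarding the origin to each new Particle.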

Rectangle localization on an image


Hello everyone,

I'm a new user of Processing; I discovered this great thing a couple of weeks ago and I'm already enjoying it. I have a question about image processing.

What I have to do is detect (and localize on the picture) a set of rectangles.

Below is a drawing of what I would like to do (attached image: Capture_ezra).

I tried to detect the lines with the Hough line detection algorithm, but it doesn't manage to do it.

Of course I just have to invert the picture; I don't need edge detection (Canny), as the image already shows the edges of the rectangles to be located.

Here is what I came up with so far.

void rect_detect(){
  PImage src = loadImage("Lignes_i.jpg");
  src.resize(width,height);
  //surface.setSize(src.width,src.height);

  opencv = new OpenCV(this,src);
  //opencv.findCannyEdges(20,75);

  // Arguments are: threshold, minLineLength, maxLineGap
  lines = opencv.findLines(100, 30, 20);
}

I don't know if the reason is that the lines are not perfectly vertical or horizontal, but the detection is irrelevant (attached image: Capture_ezra_2).
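An alternative worth trying (a hedged sketch using the contour API of the same OpenCV for Processing library): threshold the inverted image, find contours, and keep the polygon approximations that have four corners. This localizes each rectangle directly instead of reassembling it from Hough lines, and it does not care whether the sides are perfectly axis-aligned:

import gab.opencv.*;

void rectDetectContours() {
  PImage src = loadImage("Lignes_i.jpg");
  src.resize(width, height);

  OpenCV opencv = new OpenCV(this, src);
  opencv.gray();
  opencv.invert();        // make the edges bright on a dark background
  opencv.threshold(128);  // hypothetical threshold; tune it for the image

  for (Contour contour : opencv.findContours()) {
    Contour approx = contour.getPolygonApproximation();
    if (approx.getPoints().size() == 4) {  // four corners: treat it as a rectangle
      stroke(255, 0, 0);
      noFill();
      beginShape();
      for (PVector p : approx.getPoints()) {
        vertex(p.x, p.y);
      }
      endShape(CLOSE);
    }
  }
}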
