Channel: Kinect - Processing 2.x and 3.x Forum
Viewing all 530 articles

How do I resize the output of kinect.getColorImage()? Windows PC / Processing 3


I am trying to use the Processing Kinect library with another AR toolkit. The AR toolkit only accepts 640 x 480 input, so the Kinect's 1920 x 1080 color image isn't compatible and causes an ArrayIndexOutOfBoundsException. I am trying to resize the output of kinect.getColorImage(), but I keep getting an ArrayIndexOutOfBoundsException. Any help on what I'm doing wrong, or suggested solutions, would be greatly appreciated!

        void draw()
        {
          img = kinect.getColorImage();
          img.resize(640,480);

          //do stuff with the kinect's output with the following AR library
          nya.detect(img);
         ....
        }
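A workaround that usually avoids this: don't resize the library-owned image in place, but take a copy first (`PImage img = kinect.getColorImage().get();` then `img.resize(640, 480);`), or copy into a pixel buffer you own. The index math behind such a copy, sketched in plain Java (nearest-neighbor sampling is just the simplest choice of filter):

```java
// Nearest-neighbor downscale of a packed-pixel array (e.g. 1920x1080 -> 640x480).
// Writing into a fresh buffer avoids resizing the library-owned image in place.
public class Downscale {
    public static int[] resize(int[] src, int sw, int sh, int dw, int dh) {
        int[] dst = new int[dw * dh];
        for (int y = 0; y < dh; y++) {
            for (int x = 0; x < dw; x++) {
                int sx = x * sw / dw;   // map destination column to source column
                int sy = y * sh / dh;   // map destination row to source row
                dst[x + y * dw] = src[sx + sy * sw];
            }
        }
        return dst;
    }
}
```

The resulting 640 x 480 array could then be loaded into a `PImage` via `arrayCopy` into its `pixels[]` before handing it to the AR toolkit.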

SimpleOpenNi Libraries


Where can I download the SimpleOpenNI library? https://code.google.com/archive/p/simple-openni closed in 2015. Is this library mirrored somewhere trustworthy?

I ask because KinectPV2 keeps throwing this error in both the 32-bit and 64-bit versions of Processing (64-bit Windows 7):

    A library relies on native code that's not available.
    Or only works properly when the sketch is run as a 32-bit application.

and Shiffman's Open Kinect for Processing doesn't do skeleton tracking.

Thanks for any help.

How do I connect a Kinect for Xbox 360 to Processing on Windows 10?


How do I connect a Kinect for Xbox 360 to Processing on Windows 10?

Photo timer, can't figure it out


Hi there,

I am working to set up a "look away photo booth" with face tracking in OpenCV. Basically: Sees face, face looks away, counts to 10, takes picture. If there is no face, it just waits for one, muwahahaha.

Now, I sort of figured out how to make a switch (if there's a more elegant solution, please do share!) that triggers when a face has been present for more than a few seconds:

/*
*/

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

boolean isFace = false;
boolean wasFace = false; //boolean to see if there was a face
int prevMillis;
int count;
int numPics;

int isFaceTimer;
int goneFaceTimer;


Capture video;
OpenCV opencv;
OpenCV photocv;
PFont font;


void setup() {
  size(640, 480);
  numPics = 0;
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();


}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i < faces.length; i++) {
    //println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    isFace = true;
  }
  if(faces.length == 0){
    isFace = false;
  }
  if (isFace){
    wasFace = false;

    goneFaceTimer=0;
    println("THERE IS A FACE HERE");
    isFaceTimer ++;
    println("FACE TIMER"+isFaceTimer);
  }
  else {
     println("NO NO FACE");
     goneFaceTimer ++;
     println("NO FACE FOR THIS LONG"+goneFaceTimer);
      //check that the face was there long enough. Then count to 10
     if( isFaceTimer > 2 && goneFaceTimer > 0 ){
        println("AAAAAAAA");
        wasFace = true;
     }

     isFaceTimer = 0;
   }

  if (wasFace) {
    if (numPics <= 15) {
      takePic();
    }
  }
  wasFace = false;

}


void captureEvent(Capture c) {
  c.read();
}
void takePic() {
  saveFrame("photobooth"+numPics+".jpg");
  println ("pics taken"+(numPics+1));

  delay(600);

  numPics += 1;


}

My problem is, this triggers the photo when I set the check to 0:

  else {
             println("NO NO FACE");
             goneFaceTimer ++;
             println("NO FACE FOR THIS LONG"+goneFaceTimer);
              //check that the face was there long enough. Then count to 10
             if( isFaceTimer > 2 && goneFaceTimer >0 ){
                println("AAAAAAAA");
                wasFace = true;
             }
             isFaceTimer = 0;
           }

but if I set it to 10 (or any other number), it never takes a photo. This doesn't work:

  else {
             println("NO NO FACE");
             goneFaceTimer ++;
             println("NO FACE FOR THIS LONG"+goneFaceTimer);
              //check that the face was there long enough. Then count to 10
             if( isFaceTimer > 2 && goneFaceTimer > 10 ){
                println("AAAAAAAA");
                wasFace = true;
             }
             isFaceTimer = 0;
           }

nor this:

  else {
             println("NO NO FACE");
             goneFaceTimer ++;
             println("NO FACE FOR THIS LONG"+goneFaceTimer);
              //check that the face was there long enough. Then count to 10
             if( isFaceTimer > 2 && goneFaceTimer >= 10 ){
                println("AAAAAAAA");
                wasFace = true;
             }
             isFaceTimer = 0;
           }

I can't seem to figure it out, and I know it has to be something simple---any help would be much appreciated!
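For what it's worth, the likely culprit is that `isFaceTimer = 0;` runs on *every* no-face frame: by the time `goneFaceTimer` reaches 10, `isFaceTimer` was already zeroed on the very first no-face frame, so `isFaceTimer > 2` can never be true anymore. Only the `> 0` version fires, because it is checked on that same first frame. One way to restructure the timers, sketched as plain Java so it can be stepped frame by frame (the thresholds are examples):

```java
// Frame-by-frame simulation of the face/no-face timers. The key change:
// isFaceTimer is NOT zeroed on every no-face frame, so it survives long
// enough for goneFaceTimer to reach the threshold.
public class PhotoTimer {
    int isFaceTimer = 0, goneFaceTimer = 0;
    boolean wasFace = false;

    void frame(boolean faceVisible) {
        wasFace = false;
        if (faceVisible) {
            isFaceTimer++;
            goneFaceTimer = 0;
        } else {
            goneFaceTimer++;
            // face was present long enough, then gone long enough: trigger
            if (isFaceTimer > 2 && goneFaceTimer >= 10) {
                wasFace = true;
                isFaceTimer = 0;   // reset only after triggering
            }
        }
    }
}
```

In the sketch this means moving `isFaceTimer = 0;` inside the `if`, rather than leaving it at the bottom of the `else` branch.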

Make the animation to move with kinect


Hello,

I am very new to Processing. I am using Processing 3.3. I found this formula and I want to drive the animation by tracking my movements with a Kinect instead of the mouse. I am using the new Kinect on a Mac. I can get data from kinect2, but I can't find a way to connect my movements to the animation. Can someone please help me?

void setup () {
  size (500,500);
  noFill();
  stroke(255);
  strokeWeight(2);
}




void draw() {
  background(0);



  translate(width /2, height/2);

  beginShape();

  // add some vertices



  for (float theta = 0; theta <= 2 * PI; theta += 0.01) {

    float rad = r(theta,
    mouseX / 100.0, // a
    mouseY / 100.0, // b
    70, // m
    1, // n1
    2, // n2
    2 // n3
    );
    float x = rad * cos (theta) * 50;
    float y = rad * sin (theta) * 50;
    vertex (x, y);


  }

  endShape();


}


float r(float theta, float a, float b, float m, float n1, float n2, float n3) {
  return pow(pow (abs(cos(m * theta / 4.0)/a), n2 ) +
    pow (abs(sin(m * theta / 4.0) /b), n3), -1.0 / n1) ;

}
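Since the sketch already takes its `a` and `b` parameters from `mouseX / 100.0` and `mouseY / 100.0`, the Kinect hookup mostly comes down to mapping a tracked coordinate into the same ranges. A plain-Java sketch of that mapping (the hand coordinate and sensor width here are hypothetical placeholders; which library call supplies them depends on whether you track a skeleton joint or, say, the closest depth point):

```java
// Processing's map() re-implemented, to translate a tracked coordinate
// into the parameter range the sketch currently derives from the mouse.
public class ParamMap {
    public static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    // e.g. a hand at x in [0, sensorWidth) -> 'a' in roughly the range
    // mouseX / 100.0 covers on a 500-px-wide sketch (0..5)
    public static float jointToA(float handX, float sensorWidth) {
        return map(handX, 0, sensorWidth, 0, 5.0f);
    }
}
```

In `draw()` you would then call `r(theta, jointToA(handX, 512), jointToB(handY, 424), ...)` in place of the mouse-based values.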

Combine RGB data with Body Track to only show RGB of Bodies (KINECTV2/KINECTPV2 LIBRARY)


I am using Thomas Lengeling's KinectPV2 library to try to get processing to save/display only the RGB data of the bodies it detects.

Right now when it runs, it displays the color image unfiltered by the body data, regardless of whether a body is being tracked in the space:

[screenshot: sketch_170326b, 26-Mar-17 21:21:27]

When I want the finished projected product to look more like this:

[example image: goals]

Here's roughly what I've tried (referencing the pixels in the body-track image), based on Thomas Lengeling's examples for depth-to-color and body-track users:

//source code from Thomas Lengeling

import KinectPV2.*;

KinectPV2 kinect;

PImage body;

PImage bodyRGB;

int loc;

void setup() {
  size(512, 424, P3D);

  //bodyRGB = createImage(512, 424, PImage.RGB); //create empty image to hold color body pixels

  kinect = new KinectPV2(this);

  kinect.enableBodyTrackImg(true);
  kinect.enableColorImg(true);
  kinect.enableDepthMaskImg(true);

  kinect.init();
}

void draw() {
  background(255);

  body = kinect.getBodyTrackImage(); //put body data in variable

  bodyRGB = kinect.getColorImage(); //load rgb data into PImage

  //println(bodyRGB.width); 1920x1080 //println(bodyRGB.height);

  PImage cpy = bodyRGB.get();

  cpy.resize(width, height);

  //int [] colorRaw = kinect.getRawColor(); //get the raw data from depth and color

  //image(body,0,0); //display body

  loadPixels(); //load sketch pixels
  cpy.loadPixels();//load pixels to store rgb
  body.loadPixels(); //load body image pixels

  //create an x, y nested for loop for pixel location
  for (int x = 0; x < body.width; x++ ) {
    for (int y = 0; y < body.height; y++ ) {
      //pixel location
      loc = x + y * body.width;
      if (body.pixels[loc] != color(255)) { // compare against packed white, not the bare int 255
        color temp = color(cpy.pixels[loc]);
        pixels[loc] = temp;
      }
    }
  }

  //cpy.updatePixels(); //body.updatePixels();

  updatePixels();

  //image(cpy, 0, 0);
}
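One thing to watch in this kind of pixel masking: `pixels[]` holds packed ARGB ints, so white is `0xFFFFFFFF` (the value Processing's `color(255)` produces), not the bare int `255`. Testing a pixel against `255` is true for almost every pixel, which would leave the image effectively unmasked. A plain-Java illustration of the packing:

```java
// Packed ARGB pixel values: opaque white is 0xFFFFFFFF, not 255, so
// comparing a pixel int against 255 matches (almost) nothing you intend.
public class Argb {
    public static final int WHITE = 0xFFFFFFFF; // what Processing's color(255) packs to

    public static boolean isWhite(int pixel) {
        return pixel == WHITE;
    }
    // channel extraction, as Processing's red()/green()/blue() do internally
    public static int red(int pixel)   { return (pixel >> 16) & 0xFF; }
    public static int green(int pixel) { return (pixel >> 8) & 0xFF; }
    public static int blue(int pixel)  { return pixel & 0xFF; }
}
```

Also note the body-track image is 512 x 424 while the color image is 1920 x 1080; the shared `loc` index only lines up here because `cpy` is resized to the sketch's 512 x 424 first.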

Create an Image Mask On Top of a Live Feed


Hey all!

I'm trying to write a sketch that uses a live webcam feed and face tracking to put an image on top of the tracked face. After that, I want to press a key ('n' in the code below) and have it switch to a different picture. Right now I have Processing's "LiveFaceTracking" example as my base code. Any help you guys could give me would be greatly appreciated!

    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    PImage WD;
    PImage GJ;

    Capture video;
    OpenCV opencv;

    void setup() {
      size(640, 480);
      video = new Capture(this, 640/2, 480/2);
      opencv = new OpenCV(this, 640/2, 480/2);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      video.start();

      WD = loadImage("WD"); // assign the loaded images, don't discard them
      GJ = loadImage("GJ");
    }

    void draw() {
      scale(2);
      opencv.loadImage(video);

      image(video, 0, 0 );

      noFill();
      stroke(0, 255, 0);
      strokeWeight(3);
      Rectangle[] faces = opencv.detect();
      println(faces.length);

      for (int i = 0; i < faces.length; i++) {
        println(faces[i].x + "," + faces[i].y);
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
      }
    }

    void captureEvent(Capture c) {
      c.read();
    }

    void keyPressed() {
      if (key == 'n') {        // '==' compares; a single '=' assigns
        PImage tmp = WD;       // swap which image would be drawn over the face
        WD = GJ;
        GJ = tmp;
      }
    }

How to solve NoSuchMethodError


I get the following error when I run my sketch. Please suggest a solution.

    java.lang.RuntimeException: java.lang.NoSuchMethodError: processing.core.PApplet.registerDispose(Ljava/lang/Object;)V
        at processing.opengl.PSurfaceJOGL$2.run(PSurfaceJOGL.java:484)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.NoSuchMethodError: processing.core.PApplet.registerDispose(Ljava/lang/Object;)V
        at SimpleOpenNI.SimpleOpenNI.initEnv(SimpleOpenNI.java:383)
        at SimpleOpenNI.SimpleOpenNI.<init>(SimpleOpenNI.java:255)
        at bodyshape$Skeleton.kinectSetup(bodyshape.java:4965)
        at bodyshape.setup(bodyshape.java:298)
        at processing.core.PApplet.handleDraw(PApplet.java:2393)
        at processing.opengl.PSurfaceJOGL$DrawListener.display(PSurfaceJOGL.java:907)
        at jogamp.opengl.GLDrawableHelper.displayImpl(GLDrawableHelper.java:692)
        at jogamp.opengl.GLDrawableHelper.display(GLDrawableHelper.java:674)
        at jogamp.opengl.GLAutoDrawableBase$2.run(GLAutoDrawableBase.java:443)
        at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1293)
        at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:1147)
        at com.jogamp.newt.opengl.GLWindow.display(GLWindow.java:759)
        at com.jogamp.opengl.util.AWTAnimatorImpl.display(AWTAnimatorImpl.java:81)
        at com.jogamp.opengl.util.AnimatorBase.display(AnimatorBase.java:452)
        at com.jogamp.opengl.util.FPSAnimator$MainTask.run(FPSAnimator.java:178)
        at java.util.TimerThread.mainLoop(Timer.java:555)
        at java.util.TimerThread.run(Timer.java:505)

Thanks in advance! :-bd


About to lose my mind - please help!


Hi,

Trying to use motion tracking for a sketch. I've seen all of the Shiffman videos like 5 times (he's great) and still not getting it. If someone could help me out I would really appreciate it. Using a Kinect 1 and pasting the code for the sketch I'm playing around with here.

final int nbWeeds = 125;
SeaWeed[] weeds;
PVector rootNoise = new PVector(random(123456), random(123456));
int mode = 1;
float radius = 750;
boolean noiseOn = true;
PVector center;

void setup()
{
  size(1000, 1000, P2D);
  center = new PVector(width/2, height/2);
  strokeWeight(5);
  weeds = new SeaWeed[nbWeeds];
  for (int i = 0; i < nbWeeds; i++)
  {
    weeds[i] = new SeaWeed(i*TWO_PI/nbWeeds, 3*radius);
  }
}

void draw()
{
  //background(50);
  noStroke();
  fill(25, 15, 25);//, 50);
  rect(0, 0, width, height);
  rootNoise.add(new PVector(.05, .05));
  strokeWeight(1);
  for (int i = 0; i < nbWeeds; i++)
  {
    weeds[i].update();
  }
  stroke(200, 100, 100, 200);
  strokeWeight(4);
  noFill();
  ellipse(center.x, center.y, 2*radius, 2*radius);
}

void keyPressed()
{
  if (key == 'n')
  {
    noiseOn = !noiseOn;
  } else
  {
    mode = (mode + 1) % 2;
  }
}


class MyColor
{
  float R, G, B, Rspeed, Gspeed, Bspeed;
  final static float minSpeed = .2;
  final static float maxSpeed = 2.0;
  final static float minR = 200;
  final static float maxR = 255;
  final static float minG = 20;
  final static float maxG = 120;
  final static float minB = 100;
  final static float maxB = 140;

  MyColor()
  {
    init();
  }

  public void init()
  {
    R = random(minR, maxR);
    G = random(minG, maxG);
    B = random(minB, maxB);
    Rspeed = (random(1) > .5 ? 1 : -1) * random(minSpeed, maxSpeed);
    Gspeed = (random(1) > .5 ? 1 : -1) * random(minSpeed, maxSpeed);
    Bspeed = (random(1) > .5 ? 1 : -1) * random(minSpeed, maxSpeed);
  }

  public void update()
  {
    Rspeed = ((R += Rspeed) > maxR || (R < minR)) ? -Rspeed : Rspeed;
    Gspeed = ((G += Gspeed) > maxG || (G < minG)) ? -Gspeed : Gspeed;
    Bspeed = ((B += Bspeed) > maxB || (B < minB)) ? -Bspeed : Bspeed;
  }

  public color getColor()
  {
    return color(R, G, B);
  }
}
class SeaWeed
{
  final static float DIST_MAX = 5.5;//length of each segment
  final static float maxWidth = 50;//max width of the base line
  final static float minWidth = 11;//min width of the base line
  final static float FLOTATION = -3.5;//flotation constant
  float mouseDist;//mouse interaction distance
  int nbSegments;
  PVector[] pos;//position of each segment
  color[] cols;//colors array, one per segment
  float[] rad;
  MyColor myCol = new MyColor();
  float x, y;//origin of the weed
  float cosi, sinu;

  SeaWeed(float p_rad, float p_length)
  {
    nbSegments = (int)(p_length/DIST_MAX);
    pos = new PVector[nbSegments];
    cols = new color[nbSegments];
    rad = new float[nbSegments];
    cosi = cos(p_rad);
    sinu = sin(p_rad);
    x = width/2 + radius*cosi;
    y = height/2 + radius*sinu;
    mouseDist = 40;
    pos[0] = new PVector(x, y);
    for (int i = 1; i < nbSegments; i++)
    {
      pos[i] = new PVector(pos[i-1].x - DIST_MAX*cosi, pos[i-1].y - DIST_MAX*sinu);
      cols[i] = myCol.getColor();
      rad[i] = 3;
    }
  }

  void update()
  {
    PVector mouse = new PVector(mouseX, mouseY);

    pos[0] = new PVector(x, y);
    for (int i = 1; i < nbSegments; i++)
    {
      float n = noise(rootNoise.x + .002 * pos[i].x, rootNoise.y + .002 * pos[i].y);
      float noiseForce = (.5 - n) * 7;
      if (noiseOn)
      {
        pos[i].x += noiseForce;
        pos[i].y += noiseForce;
      }
      PVector pv = new PVector(cosi, sinu);
      pv.mult(map(i, 1, nbSegments, FLOTATION, .6*FLOTATION));
      pos[i].add(pv);

      //mouse interaction
      //if(pmouseX != mouseX || pmouseY != mouseY)
      {
        float d = PVector.dist(mouse, pos[i]);
        if (d < mouseDist)// && pmouseX != mouseX && abs(pmouseX - mouseX) < 12)
        {
          PVector tmpPV = mouse.get();
          tmpPV.sub(pos[i]);
          tmpPV.normalize();
          tmpPV.mult(mouseDist);
          tmpPV = PVector.sub(mouse, tmpPV);
          pos[i] = tmpPV.get();
        }
      }

      PVector tmp = PVector.sub(pos[i-1], pos[i]);
      tmp.normalize();
      tmp.mult(DIST_MAX);
      pos[i] = PVector.sub(pos[i-1], tmp);

      //keep the points inside the circle
      if (PVector.dist(center, pos[i]) > radius)
      {
        PVector tmpPV = pos[i].get();
        tmpPV.sub(center);
        tmpPV.normalize();
        tmpPV.mult(radius);
        tmpPV.add(center);
        pos[i] = tmpPV.get();
      }
    }

    updateColors();

    if (mode == 0)
    {
      stroke(0, 100);
    }
    beginShape();
    noFill();
    for (int i = 0; i < nbSegments; i++)
    {
      float r = rad[i];
      if (mode == 1)
      {
        stroke(cols[i]);
        vertex(pos[i].x, pos[i].y);
        //line(pos[i].x, pos[i].y, pos[i+1].x, pos[i+1].y);
      } else
      {
        fill(cols[i]);
        noStroke();
        ellipse(pos[i].x, pos[i].y, 2, 2);
      }
    }
    endShape();
  }

  void updateColors()
  {
    myCol.update();
    cols[0] = myCol.getColor();
    for (int i = nbSegments-1; i > 0; i--)
    {
      cols[i] = cols[i-1];
    }
  }
}
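For the motion-tracking part: the only mouse dependency is the `mouse` PVector in `SeaWeed.update()`, so one common first step is to replace it with a point derived from the depth data, e.g. the average position of all pixels inside a depth band. A plain-Java sketch of that (the 640 x 480 size and the thresholds are assumptions for a Kinect 1; with Open Kinect for Processing you would feed it `kinect.getRawDepth()`):

```java
// Average position ("centroid") of depth pixels inside a min/max range --
// a simple stand-in for the mouse position in the SeaWeed mouse interaction.
public class DepthCentroid {
    public static float[] centroid(int[] rawDepth, int w, int minD, int maxD) {
        float sumX = 0, sumY = 0;
        int count = 0;
        for (int i = 0; i < rawDepth.length; i++) {
            int d = rawDepth[i];
            if (d >= minD && d <= maxD) {
                sumX += i % w;   // pixel column
                sumY += i / w;   // pixel row
                count++;
            }
        }
        if (count == 0) return null; // nothing in range: keep the previous point
        return new float[] { sumX / count, sumY / count };
    }
}
```

In `update()` you would then build `PVector mouse = new PVector(cx, cy);` from the centroid (scaled from depth-image coordinates to the 1000 x 1000 sketch) instead of from `mouseX`/`mouseY`.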

What am I missing? java.lang.UnsatisfiedLinkError


I see references to this error in several places but I can't seem to find a thread anywhere that actually states the fix! Any help would be wonderful for a sad n00b. Mac Pro desktop, OS X 10.11.6, Processing 3.3, Kinect V2.

    java.lang.UnsatisfiedLinkError: no Kinect20.Face in java.library.path
        at processing.opengl.PSurfaceJOGL$2.run(PSurfaceJOGL.java:480)
        at java.lang.Thread.run(Thread.java:745)
    A library relies on native code that's not available.
    Or only works properly when the sketch is run as a 32-bit application.

Kinect PointCloudDepthOGL display delay

$
0
0

Hi everyone, I have been trying to add a 3-6 second delay to the live point-cloud display from my Kinect 2. I am new to all this, so I have based my work on Thomas Lengeling's KinectPV2 library. I also came across a post here (https://forum.processing.org/two/discussion/15800/how-to-delay-the-feed-of-the-kinect-2-point-cloud) where this is solved, but I am having a hard time applying it to the code I have. This is the code I am working with:

import java.nio.*;
import KinectPV2.*;

KinectPV2 kinect;

int Vloc, z = -200, VBOID;
float scale = 260, pointScale = 100.0, a = 0.1;

int distanceMax = 4500, distanceMin = 0;

PGL pgl;
PShader psh;
ArrayList clouds;
int limit = 120;

public void setup() {
  size(1280, 720, P3D);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enablePointCloud(true);
  kinect.setLowThresholdPC(distanceMin);
  kinect.setHighThresholdPC(distanceMax);

  kinect.init();

  psh = loadShader("frag.glsl", "vert.glsl");
  PGL pgl = beginPGL();
  IntBuffer buffer = IntBuffer.allocate(1);
  pgl.genBuffers(1, buffer);
  VBOID = buffer.get(0);
  endPGL();

  clouds = new ArrayList();
}

public void draw() {
  background(0);
  translate(width/2, height/2, z);
  scale(scale, -1*scale, scale);
  rotate(a, 0.0f, 1.0f, 0.0f);
  kinect.setLowThresholdPC(distanceMin);
  kinect.setHighThresholdPC(distanceMax);

  FloatBuffer cloudBuf = kinect.getPointCloudPos();

  pgl = beginPGL();
  psh.bind();

  Vloc = pgl.getAttribLocation(psh.glProgram, "vertex");
  pgl.enableVertexAttribArray(Vloc);
  int vData = kinect.WIDTHDepth * kinect.HEIGHTDepth * 3;

  pgl.bindBuffer(PGL.ARRAY_BUFFER, VBOID);
  pgl.bufferData(PGL.ARRAY_BUFFER, Float.BYTES*vData, cloudBuf, PGL.DYNAMIC_DRAW);
  pgl.vertexAttribPointer(Vloc, 3, PGL.FLOAT, false, Float.BYTES*3, 0);

  pgl.bindBuffer(PGL.ARRAY_BUFFER, 0);

  // draw the point buffer as a set of points
  // (is this where the delay-draw mechanism should be, or what it should encapsulate?)
  pgl.drawArrays(PGL.POINTS, 0, vData);

  pgl.disableVertexAttribArray(Vloc);

  psh.unbind();
  endPGL();

  stroke(255, 0, 0);
}
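One common approach to this kind of delay is to queue a *copy* of each frame's point data and draw the frame from a few seconds ago instead of the newest one. A plain-Java sketch of that queue (the 30 fps figure is an assumption, so measure your actual frame rate; the incoming `FloatBuffer` can be copied into a `float[]` with its `get()` method before queueing):

```java
import java.util.ArrayDeque;

// Fixed-length frame queue: push a copy of each incoming frame, and once the
// queue holds more than `delayFrames` entries, pop the oldest for drawing.
// At ~30 fps, a 3 s delay is roughly 90 frames.
public class FrameDelay {
    private final ArrayDeque<float[]> queue = new ArrayDeque<>();
    private final int delayFrames;

    public FrameDelay(int delayFrames) {
        this.delayFrames = delayFrames;
    }

    /** Returns the delayed frame to draw, or null while the queue fills up. */
    public float[] push(float[] frame) {
        queue.addLast(frame.clone()); // copy: the source buffer is reused each frame
        if (queue.size() > delayFrames) return queue.pollFirst();
        return null;
    }
}
```

In `draw()` you would push the current cloud and, when `push` returns a frame, wrap it back into a `FloatBuffer` for `bufferData`; until then, skip drawing (or draw the live feed).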

OpenKinect


Hi all,

Using the Kinect V2 I'm trying to blur anything further than a threshold distance in the RGB image based on the depth image.

To do so I first need to align the Depth and RGB image, how can I do this?

Thanks, Charles

Kinect issue!


Hi!

Trying to run a Processing and Kinect version 1 interactive sketch. Running into errors. One error: cannot find anything named "num". This is in reference to my Processing sketch. Please help! Thank you!

import org.openkinect.freenect.*;
import org.openkinect.processing.*;

Kinect kinect;

// These were missing and cause the "cannot find anything named num"
// error: num, vari, sz and col are all used below but never declared.
int num = 50, vari = 25;
float sz;
int col[] = new int[num];

// Depth image
PImage depthImg;

// Which pixels do we care about?
// These thresholds can also be found with a variety of methods
float minDepth =  996;
float maxDepth = 2493;

// What is the kinect's angle
float angle;

void setup() {
  size(640, 480);
  colorMode(HSB, 255, 50, 50);
  frameRate(2.5);

  for (int i=0; i<num; i++) {
    col[i]= (int) random(255);
  }

  kinect = new Kinect(this);
  kinect.initDepth();
  angle = kinect.getTilt();

  // Blank image
  depthImg = new PImage(kinect.width, kinect.height);
}

void draw() {
  background(#FFFFFF);

  for (int i=0; i<num; i++) {
    float x = width/2 + random(-vari, vari);
    float y = height/2 + random(-vari, vari);
    stroke(col[i], 100, 100, 50);
    strokeWeight(width/5);
    noFill();
    sz = width/5;
    ellipse(x, y, sz, sz);
  }
  image(kinect.getDepthImage(), 0, 0);

  // Calibration
   //minDepth = map(mouseX,0,width, 0, 4500);
  //maxDepth = map(mouseY,0,height, 0, 4500);

  // Threshold the depth image
  int[] rawDepth = kinect.getRawDepth();
  for (int i=0; i < rawDepth.length; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = color(255);
    } else {
      depthImg.pixels[i] = color(0);
    }
  }

  // Draw the thresholded image
  depthImg.updatePixels();
  image(depthImg, kinect.width, 0);

  //Comment for Calibration
  fill(0);
  text("TILT: " + angle, 10, 20);
  text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

  //Calibration Text
  //fill(255);
  //textSize(32);
  //text(minDepth + " " + maxDepth, 10, 64);
}

// Adjust the angle and the depth threshold min and max
void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      angle++;
    } else if (keyCode == DOWN) {
      angle--;
    }
    angle = constrain(angle, 0, 30);
    kinect.setTilt(angle);
  } else if (key == 'a') {
    minDepth = constrain(minDepth+10, 0, maxDepth);
  } else if (key == 's') {
    minDepth = constrain(minDepth-10, 0, maxDepth);
  } else if (key == 'z') {
    maxDepth = constrain(maxDepth+10, minDepth, 2047);
  } else if (key =='x') {
    maxDepth = constrain(maxDepth-10, minDepth, 2047);
  }
}

please share knowledge on Kinect Drivers : KinectCamera (v1.6.0.476) vs libusbK (v3.0.7.0)


Dear Friends,

Consider me totally ignorant about how hardware drivers work. To get me started, I would like some quick guidance from you.

I recently got my hands on a Kinect V1 (model 1473), running with Processing 3.2.3 on a Windows i7 machine with a GTX 1060. I deduced that the two Kinect drivers, KinectCamera (v1.6.0.476) and libusbK (v3.0.7.0), give two different options: one works with skeleton data and the other does not.

What is the difference between the drivers, and where should I get started to understand what I am missing when I use one driver versus the other?

Thanks a lot in advance Raj

Kinect Issue


Hi!

My sketch works but when I try to add the Kinect I can't get it to work. Ideally I'd like to project the piece and have people be able to play with it. Right now it's not possible. Not sure what I'm doing wrong. Please help! Thanks!

import org.openkinect.freenect.*;
import org.openkinect.processing.*;

Kinect kinect;
int f, num = 50, vari = 25;
float sz;
int col[] = new int[num];
boolean save;
// Depth image
PImage depthImg;

// Which pixels do we care about?
// These thresholds can also be found with a variety of methods
float minDepth =  996;
float maxDepth = 2493;

// What is the kinect's angle
float angle;

void setup() {
  size(640, 480);
  colorMode(HSB, 255, 50, 50);
  frameRate(2.5);

  for (int i=0; i<num; i++) {
    col[i]= (int) random(255);
  }

  kinect = new Kinect(this);
  kinect.initDepth();
  angle = kinect.getTilt();

  // Blank image
  depthImg = new PImage(kinect.width, kinect.height);
}

void draw() {
  background(#FFFFFF);

  for (int i=0; i<num; i++) {
    float x = width/2 + random(-vari, vari);
    float y = height/2 + random(-vari, vari);
    pushMatrix();
    translate(x, y);
    stroke(col[i], 100, 100, 50);
    strokeWeight(width/5);
    noFill();
    sz = width/5;
    ellipse(x, y, sz, sz);
    popMatrix();
  }
  image(kinect.getDepthImage(), 0, 0);

  // Calibration
   //minDepth = map(mouseX,0,width, 0, 4500);
  //maxDepth = map(mouseY,0,height, 0, 4500);

  // Threshold the depth image
  int[] rawDepth = kinect.getRawDepth();
  for (int i=0; i < rawDepth.length; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = color(255);
    } else {
      depthImg.pixels[i] = color(0);
    }
  }

  // Draw the thresholded image
  depthImg.updatePixels();
  image(depthImg, kinect.width, 0);

  //Comment for Calibration
  fill(0);
  text("TILT: " + angle, 10, 20);
  text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

  //Calibration Text
  //fill(255);
  //textSize(32);
  //text(minDepth + " " + maxDepth, 10, 64);
}

// Adjust the angle and the depth threshold min and max
void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      angle++;
    } else if (keyCode == DOWN) {
      angle--;
    }
    angle = constrain(angle, 0, 30);
    kinect.setTilt(angle);
  } else if (key == 'a') {
    minDepth = constrain(minDepth+10, 0, maxDepth);
  } else if (key == 's') {
    minDepth = constrain(minDepth-10, 0, maxDepth);
  } else if (key == 'z') {
    maxDepth = constrain(maxDepth+10, minDepth, 2047);
  } else if (key =='x') {
    maxDepth = constrain(maxDepth-10, minDepth, 2047);
  }
}
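One way to let people play with the piece is to derive a sketch parameter from the depth data you are already thresholding: count how many pixels land inside the [minDepth, maxDepth] band and map that count onto, say, `vari`. A plain-Java sketch of the idea (the 5..100 output range and the `maxCount` ceiling are arbitrary examples to tune):

```java
// Count depth pixels inside the threshold band and map the count onto a
// sketch parameter (here the jitter amount `vari`).
public class DepthReact {
    public static int countInRange(int[] rawDepth, float minD, float maxD) {
        int count = 0;
        for (int d : rawDepth) {
            if (d >= minD && d <= maxD) count++;
        }
        return count;
    }

    // clamp the count to maxCount, then map linearly onto vari in [5, 100]
    public static int countToVari(int count, int maxCount) {
        if (count > maxCount) count = maxCount;
        return 5 + count * 95 / maxCount;
    }
}
```

In `draw()` that would become something like `vari = countToVari(countInRange(rawDepth, minDepth, maxDepth), 50000);`, so the circles jitter more as a visitor fills more of the in-range space.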

CAN Kinect Physics Code Examples


I've been getting some notifications that the code examples for the CAN Kinect Physics tutorial no longer work. This is because the code formatting plugin was removed and because the code is severely outdated. For future generations, I will post the code examples below. This code is provided as is, since I stopped supporting it long ago (I haven't really used the Kinect since writing the tutorial). Perhaps those still interested in this code can gather here and, if needed, post updated versions of these code examples that run in more recent versions of Processing and the relevant libraries. I still get mails about this tutorial regularly and I will be pointing everyone to this thread. Good luck & happy, creative coding! :)

EDIT 30.05.2014

It seems the forum also has problems correctly displaying the code or something else went wrong. Either way I am providing a download link to the original three code examples (file size: 16 KB). Once again, I can and will no longer provide any support whatsoever on these code examples as I've stopped using the Kinect two years ago. Of course feel free to share updated code examples via this thread.

LINK TO A ZIP-FILE CONTAINING THE ORIGINAL CODE EXAMPLES:

https://dl.dropboxusercontent.com/u/94122292/CANKinectPhysics.zip

anyone know anything about facial landmark detection?


I want to write a sketch that uses my live camera and detects facial features, so I can expand on that and morph or manipulate them. Does anyone know anything about that? Do I need to download a library?

Load textfile into array to display depth image


Hey guys,

After watching some tutorials about the Kinect I finally managed to get the depth image and turn it into text/numbers. I have a text file with a poem (just one for now) and I want the sketch to pick random words from it to create something new. In my code I load the file, but instead of words it gives me numbers. Does anyone know what I'm doing wrong?

import org.openkinect.freenect.*;
import org.openkinect.processing.*;

String[] lines;
String[] tokens;
String[] allwords;


PImage depthImg;
int minDepth =  400;
int maxDepth = 900;
int kinectWidth = 640;
int kinectHeight = 480;
//max depth 2048
int cont_length = displayWidth*displayHeight;
float angle;
float reScale;

Kinect kinect;


void setup() {
  lines = loadStrings("rainbow.txt");
  String allwords = join(lines, "\n");
  tokens = splitTokens(allwords, ",; .?1234567890!");
  println(tokens);
  reScale = (float) width / kinectWidth;

  size(1280, 800, P3D);

  PFont f = createFont( "Franklin Gothic Medium", 24 );
  textFont(f);

  kinect = new Kinect(this);
  kinect.initDepth();
  angle = kinect.getTilt();
  depthImg = new PImage(640, 480, ARGB);
  depthImg.filter(BLUR, 1);
  reScale = (float) width / kinectWidth;
}



void draw() {
  background(0);
  int[] rawDepth = kinect.getRawDepth();
  float minThresh = map(mouseX, 0, width, 0, 4500);
  float maxThresh = map(mouseY, 0, width, 0, 4500);


  for (int x=0; x < kinect.width; x+=10) {
    for (int y=0; y < kinect.height; y+=10) {
      int offset = x + y * kinect.width;
      int d = rawDepth[offset];
      float b = brightness(depthImg.pixels[offset]);
      float z = map(b, 0, 255, 250, -250);





      if (d >= minDepth && d <= maxDepth) {


        fill(0, 255, 0);
        pushMatrix();
        textSize(10);
        translate(x, y, z);
        text(int(random(tokens.length)), x, y );

        popMatrix();
      }
    }
  }
  text(minThresh + " " + maxThresh, 10, 64);
  translate(0, (height-kinectHeight*reScale)/2);
  scale(reScale);
  image(depthImg, 0, 0, displayWidth, displayHeight);
  depthImg.updatePixels();




  fill(0);
  text("TILT: " + angle, 10, 20);
}

void mousePressed() {
  saveFrame();
}
void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      angle++;
    } else if (keyCode == DOWN) {
      angle--;
    }
    angle = constrain(angle, 0, 30);
    kinect.setTilt(angle);
  }
}
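The numbers come from `text(int(random(tokens.length)), x, y)`: that draws the random *index* itself, not the word at that index. Indexing into `tokens` with the random number gives the word, i.e. `text(tokens[int(random(tokens.length))], x, y);`. The same idea in plain Java:

```java
import java.util.Random;

// Picking a random word: index into the token array with the random number
// instead of drawing the number itself.
public class WordPick {
    public static String pick(String[] tokens, Random rng) {
        return tokens[rng.nextInt(tokens.length)]; // the word, not the index
    }
}
```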

Speeding up Open CV detect face


Hi All,

I'm trying to implement real-time face detection using OpenCV on the IR image from the Kinect; unfortunately it takes my sketch from 60 fps down to 6 fps. I am aware KinectPV2 does face detection, but it's nowhere near as good as OpenCV's. Can someone suggest a solution? I've tried the multithreaded "you are einstein" sketch, but I couldn't get it to run.

import KinectPV2.*;
import gab.opencv.*;
import java.awt.Rectangle;

KinectPV2 kinect;

FaceData [] faceData;

OpenCV opencv;
Rectangle[] faces;

PImage img;

void setup() {
  size(1000, 500, P2D);

  opencv = new OpenCV(this, 512, 424);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  kinect = new KinectPV2(this);

  //for face detection base on the infrared Img
  kinect.enableInfraredImg(true);

  //enable face detection
  kinect.enableFaceDetection(true);

  kinect.enableDepthImg(true);

  kinect.init();
}

void draw() {
  background(0);
  img = kinect.getInfraredImage(); //512 424


  opencv.loadImage(img);
  faces = opencv.detect();



  image(img, 0, 0);
  image(kinect.getDepthImage(), img.width, 0);

  fill(255);
  text("frameRate "+frameRate, 50, 50);

  noFill();
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
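A common way to claw back the frame rate is to run the cascade only every Nth frame (often on a downscaled image as well) and reuse the cached rectangles in between; faces move very little in a handful of frames. A plain-Java sketch of the throttling part (`detect` stands in for the `opencv.detect()` call; N = 5 is an arbitrary choice):

```java
import java.util.function.Supplier;

// Run an expensive detector only every Nth frame and reuse the cached
// result in between.
public class ThrottledDetect<T> {
    private final int every;
    private int frame = 0;
    private T cached = null;

    public ThrottledDetect(int every) {
        this.every = every;
    }

    public T update(Supplier<T> detect) {
        // re-detect on every Nth frame, or if we have no result yet
        if (frame % every == 0 || cached == null) cached = detect.get();
        frame++;
        return cached;
    }
}
```

In `draw()` the equivalent would be `if (frameCount % 5 == 0) faces = opencv.detect();` while drawing the last `faces` every frame; shrinking `img` with `resize()` before `loadImage()` (and scaling the rectangles back up) compounds the gain.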

Get Depth Value for Each Pixel


Hi All,

I'd really appreciate some help on this, I'm using the KinectPV2 library which is great but very poorly documented.

With reference to the example "MapDepthToColor" copied below, I'm trying to retrieve the depth for each RGB pixel. I'm having a hard time deciphering what's going on; can someone help me out?

Thanks in advance, Charles

/*
Thomas Sanchez Lengeling.
http://codigogenerativo.com/

KinectPV2, Kinect for Windows v2 library for processing

Color to depth example,
the color frame is aligned to the depth frame
*/

import KinectPV2.*;

KinectPV2 kinect;

int [] depthZero;

//BUFFER ARRAY TO CLEAN THE PIXELS
PImage depthToColorImg;

void setup() {
  size(1024, 848, P3D);

  depthToColorImg = createImage(512, 424, PImage.RGB);
  depthZero    = new int[ KinectPV2.WIDTHDepth * KinectPV2.HEIGHTDepth];

  //SET THE ARRAY TO 0s
  for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
    for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {
      depthZero[424*i + j] = 0;
    }
  }

  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableColorImg(true);
  kinect.enablePointCloud(true);

  kinect.init();
}

void draw() {
  background(0);

  float [] mapDCT = kinect.getMapDepthToColor(); // Length: 434,176

  //get the raw data from depth and color
  int [] colorRaw = kinect.getRawColor(); // Length: 2,073,600

  //clean the pixels
  PApplet.arrayCopy(depthZero, depthToColorImg.pixels);

  int count = 0;
  depthToColorImg.loadPixels();
  for (int i = 0; i < KinectPV2.WIDTHDepth; i++) {
    for (int j = 0; j < KinectPV2.HEIGHTDepth; j++) {

      //incoming pixels 512 x 424 with position in 1920 x 1080
      float valX = mapDCT[count * 2 + 0];
      float valY = mapDCT[count * 2 + 1];

      //maps the pixels to 512 x 424, not necessary but looks better
      int valXDepth = (int)((valX/1920.0) * 512.0);
      int valYDepth = (int)((valY/1080.0) * 424.0);

      int  valXColor = (int)(valX);
      int  valYColor = (int)(valY);

      if ( valXDepth >= 0 && valXDepth < 512 && valYDepth >= 0 && valYDepth < 424 &&
        valXColor >= 0 && valXColor < 1920 && valYColor >= 0 && valYColor < 1080) {
        color colorPixel = colorRaw[valYColor * 1920 + valXColor];
        //color colorPixel = depthRaw[valYDepth*512 + valXDepth];
        depthToColorImg.pixels[valYDepth * 512 + valXDepth] = colorPixel;
      }
      count++;
    }
  }
  depthToColorImg.updatePixels();

  image(depthToColorImg, 0, 424);
  image(kinect.getColorImage(), 0, 0, 512, 424);
  image(kinect.getDepthImage(), 512, 0);

  text("fps: "+frameRate, 50, 50);
}
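As far as I can tell, `mapDCT` stores, for each of the 512 x 424 depth pixels, its (x, y) position in the 1920 x 1080 color frame, two floats per depth pixel, which is why the example reads `mapDCT[count * 2 + 0]` and `mapDCT[count * 2 + 1]`. To get a depth value per RGB pixel you can run the same mapping the other way: for each depth index, write its raw depth into a color-sized lookup table. A plain-Java sketch (array sizes taken from the example's comments; the name of KinectPV2's raw-depth accessor is an assumption, so check the library's examples for the exact call):

```java
// Build a depth value per color pixel by inverting the depth->color map.
// mapDCT holds (x, y) in 1920x1080 color space for each of the 512x424 depth
// pixels (two floats per pixel), as in the MapDepthToColor example; rawDepth
// is the matching 512x424 raw depth array.
public class DepthToColorLookup {
    public static int[] depthForColor(float[] mapDCT, int[] rawDepth) {
        int[] out = new int[1920 * 1080]; // 0 where no depth pixel maps
        for (int i = 0; i < rawDepth.length; i++) {
            int cx = (int) mapDCT[i * 2];
            int cy = (int) mapDCT[i * 2 + 1];
            if (cx >= 0 && cx < 1920 && cy >= 0 && cy < 1080) {
                out[cy * 1920 + cx] = rawDepth[i];
            }
        }
        return out;
    }
}
```

Note the coverage is sparse: there are far fewer depth pixels than color pixels, so most entries stay 0 and you would typically fill holes from neighboring samples.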