Kinect - Processing 2.x and 3.x Forum

openKinect + Fisica. movement


Hello. Before I ask anything, two things: 1. my native language is Spanish, and 2. I'm not a professional programmer, I'm learning. So please excuse me for all the grammatical errors, the programming errors, and the messy code.

OK, I want to make some "organisms" that move on their own, and then, when someone comes closer to the Kinect, have that person modify the movement of these "organisms". For this I have been using the Fisica library and the Open Kinect library, but I can't move them with the data taken from the Kinect.

This is my code:

Thanks for any help.


/* BASED ON A TUTORIAL BY DANIEL SHIFFMAN */

import org.openkinect.processing.*;

Kinect kinect;
PImage img;
float minThresh = 485;
float maxThresh = 770;

import fisica.*;
FWorld world;

int cantidad = 10;
ArrayList<Noctilucas> noctilucas;

FBox box;

void setup(){

size(640, 520, P3D);
kinect = new Kinect(this);
kinect.initDepth();
img = createImage(kinect.width, kinect.height, RGB);

//--------- WORLD (Fisica setup)
Fisica.init(this);
world = new FWorld();
world.setGravity(0, 0);
world.setEdges();

box = new FBox(60,60);

noctilucas = new ArrayList<Noctilucas>();

for (int i = 0; i < cantidad; i++) {
  Noctilucas n = new Noctilucas();
  world.add(n);
  noctilucas.add(n);
  n.crea(32, 233, 245);
}

}

void draw(){

background(0);

img.loadPixels();

/* depth calibration: the Kinect version 1 (model 1414) has values between zero and ...
   use these lines to adjust the minThresh and maxThresh variables */
// minThresh = map(mouseX, 0, width, 0, 4500);
// maxThresh = map(mouseY, 0, height, 0, 4500);
// println(minThresh, maxThresh);

int[] depth = kinect.getRawDepth();

/* running sums of the x and y coordinates of all
   in-range pixels, plus the total pixel count */

float sumX = 0;
float sumY = 0;
float totalPixels = 0;

// look up the depth of each pixel

for (int x = 0; x < kinect.width; x++) {
  for (int y = 0; y < kinect.height; y++) {

  int offset = x + y * kinect.width;
  float d = depth[offset];

  // minimum and maximum depth range
  // that the interactor must be within

  if(d > minThresh && d < maxThresh && x > 105){

  /* pixels that are between the minimum
  and maximum threshold (depth) turn violet */

  img.pixels[offset] = color(255, 0 , 150);

  /* accumulate the x and y coordinates of
  every pixel that falls inside the
  chosen depth range */

  sumX += x;
  sumY += y;
  totalPixels++;

  } else {

    img.pixels[offset] = color(0);
  }
}

}

img.updatePixels();
image(img, 0, 0);

// avgX/avgY hold the average (centroid) position

float avgX = sumX / totalPixels;
float avgY = sumY / totalPixels;
fill(150, 0, 255);
//ellipse(avgX, avgY, 64, 64);

for (int i = noctilucas.size()-1; i >= 0; i--) {
  Noctilucas n = noctilucas.get(i);
  n.movimiento();
  n.kinectDepth(avgX, avgY);
}

world.step();
world.draw();
}

//------- class Noctilucas -------
class Noctilucas extends FBlob {

float s = random(10, 20);
float x, y;

Noctilucas() {

super();

}

void crea(color r, color g, color b) {

x = width/2-200 + random(-5, 5);
y = height/2-10 + random(-5, 5);

setAsCircle(random(x - 10), random(y - 10), s, 10);
setStroke(r, g, b);
setStrokeWeight(1);
//setNoStroke();
setFill(94, 133, 157, 70);
setGrabbable(false);
setName("hijas");

}

void movimiento() {

if (frameCount % 2 == 0)
{
  x = random(-70, 70);
  y = random(-30, 30);


  addForce(x, y);
}

}

void kinectDepth(float mX, float mY) {

setPosition(mX, mY);

  }
}
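A note for anyone landing here with the same problem: two things in the sketch above are likely culprits. When nobody is in range, totalPixels stays 0, so avgX and avgY become NaN and setPosition() silently breaks; and calling setPosition() every frame overrides the physics simulation, so the blobs stop behaving like Fisica bodies. A minimal sketch of an alternative (assuming Fisica's getX()/getY()/addForce() methods; addForce is already used above, and atrae() is a hypothetical method name): guard the average and steer with a force instead of teleporting.

// in draw(): only use the centroid when something was actually detected
if (totalPixels > 0) {
  float avgX = sumX / totalPixels;
  float avgY = sumY / totalPixels;
  for (int i = noctilucas.size()-1; i >= 0; i--) {
    Noctilucas n = noctilucas.get(i);
    n.movimiento();
    n.atrae(avgX, avgY);
  }
}

// in the Noctilucas class: steer toward the target with a force,
// so Fisica keeps integrating the motion instead of being overridden
void atrae(float targetX, float targetY) {
  PVector dir = new PVector(targetX - getX(), targetY - getY());
  dir.normalize();
  dir.mult(50);  // attraction strength; tune to taste
  addForce(dir.x, dir.y);
}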


Low frame rate because of GetImage


I use the Kinect4WinSDK library. My sketch runs at a low frame rate when I use GetImage, but if I delete GetImage, the frame rate returns to normal. How do I get a normal frame rate while still using GetImage? Help me please :)
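One workaround worth trying (a sketch, not a tested fix): GetImage() copies a full RGB frame from the SDK on every call, so fetching a frame only every few draw() passes and reusing the cached PImage in between can recover most of the frame rate. This assumes the Kinect4WinSDK Kinect class and its GetImage() method as used in the library's examples.

import kinect4WinSDK.Kinect;

Kinect kinect;
PImage cachedRGB;  // last colour frame we fetched

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
}

void draw() {
  background(0);
  // only pay the cost of GetImage() on every 4th frame
  if (cachedRGB == null || frameCount % 4 == 0) {
    cachedRGB = kinect.GetImage();
  }
  if (cachedRGB != null) {
    image(cachedRGB, 0, 0, 640, 480);
  }
}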

The class "PApplet" does not exist amongst other things.....


Hello,

New to the forum and to Processing, so I need a lot of help. I'm trying to follow along with Daniel Shiffman's tutorial for setting up the Kinect on a Mac.


I have downloaded and installed the Open Kinect Processing drivers. I have my Kinect (version 1, model 1414) connected, and I'm not getting any "no kinect" errors. I have followed his text exactly and I can't get anything to work. When I hit Command-R I don't even get a window; all I get is my console filling up with "at java.awt.EventDispatchThread.run(EventDispatch.java:82)", as you can see below.

Have I missed a step? PLEASE HELP!

[Two screenshots of the console errors were attached.]
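For anyone debugging the same setup, a minimal depth-image sketch (following the pattern of the Open Kinect for Processing examples) is a quick way to check whether the library and Kinect are talking at all, independently of any tutorial code:

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  // if this shows a greyscale depth image, the library and driver are fine
  image(kinect.getDepthImage(), 0, 0);
}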

Can I extract the point data I get from the Kinect to be used in after effects?


Looking to somehow export the data the Kinect is capturing into a format that I can use in After Effects. Is there any way to use the point data to export an OBJ sequence?
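One possible route (a sketch under assumptions, using the Open Kinect for Processing raw depth API for a Kinect v1): write one small .obj file of vertices per frame, giving an OBJ sequence that After Effects can ingest through plugins such as Plexus or Element 3D. The coordinates here are raw pixel/depth units; you would still want to scale them to real-world units for a clean composite.

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480, P3D);
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  int[] depth = kinect.getRawDepth();
  // one OBJ file per frame -> an OBJ sequence
  PrintWriter obj = createWriter("frames/frame" + nf(frameCount, 5) + ".obj");
  int skip = 4;  // subsample so the files stay manageable
  for (int x = 0; x < kinect.width; x += skip) {
    for (int y = 0; y < kinect.height; y += skip) {
      int d = depth[x + y * kinect.width];
      if (d > 0 && d < 2047) {  // drop invalid Kinect v1 readings
        obj.println("v " + x + " " + (kinect.height - y) + " " + d);
      }
    }
  }
  obj.flush();
  obj.close();
}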

getting started w/kinect


What is the newest (or best) version of the Kinect to purchase for working with Processing on a Mac? A lot of the reference info I'm finding online seems dated or vague. Does anyone have a "getting started" checklist they recommend, or know if this Shiffman article is still accurate?

Spout and videoExporter Enabled Kinect Masker


Over the last two weeks I have gone from not knowing much about Processing to having a final product. Thank you to @GoToLoop, @hamoid and others: your sketches and contributions to this forum are invaluable!

I give you Body Mapper!!

Attached is a sketch that interfaces with a Kinect v1. The depth image is used to create a mask overlay, and user videos are used as textures for projection mapping, or more specifically body mapping. The sketch looks for .mp4 and .mov files in your data directory and lets you cycle forwards and backwards through these videos. When you are ready, you can either export the video via the onboard videoExport to .mp4 (saved in a directory that you need to create, called "savedVideo"), or you can share the frames via Spout (I really wish there was a Spout recorder similar to Syphon Recorder, and there is, because I made one using Max/MSP).

Let me know what you think! I'm sure I have made some strange code, but then again I don't really know what I'm doing and this is very much a learning experience for me.

Enjoy!

N

//    BODY MAPPER

//Cobbled together by Nicolas de Cosson 2016


//
//            SpoutSender
//
//      Send to a Spout receiver
//
//           spout.zeal.co
//
//       http://spout.zeal.co/download-spout/
//
/**
 * Movie Player (v1.21)
 * by GoToLoop  (2014/Oct/31)
 *
 * forum.processing.org/two/discussion/7852/
 * problem-with-toggling-between-multiple-videos-on-processing-2-2-1
 */
/*
  This sketch shows how you can record different takes.
 */

import com.hamoid.*;
import processing.video.Movie;
import spout.*;
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
import org.gstreamer.elements.PlayBin2;
import java.io.FilenameFilter;

static final PlayBin2.ABOUT_TO_FINISH FINISHING = new PlayBin2.ABOUT_TO_FINISH() {
  @Override public void aboutToFinish(PlayBin2 elt) {
  }
};

//useful so that we do not overwrite movie files in the save directory
int ye = year();
int mo = month();
int da = day();
int ho = hour();
int mi = minute();
int se = second();
//global frames per second
static final float FPS = 30.0;
//index
int idx;
//string array for films located in data directory
String[] FILMS;
//string for whether or not we are exporting to .mp4 using ffmpeg
String record;
boolean isPaused;
boolean recording = false;
// Depth image
PImage depthImg;
// Which pixels do we care about?
int minDepth =  60;
int maxDepth = 800;
//max depth 2048

//declare a kinect object
Kinect kinect;
//declare videoExport
VideoExport videoExport;
//movie array
Movie[] movies;
//movie
Movie m;
// DECLARE A SPOUT OBJECT
Spout spout;


void setup() {
  //I have to call resize because for some reason P2D does not
  //seem to actually size to display width/height on the first call
  size(displayWidth, displayHeight, P2D);
  surface.setResizable(true);
  surface.setSize(displayWidth, displayHeight);
  surface.setLocation(0, 0);
  noSmooth();
  frameRate(FPS);
  background(0);

  kinect = new Kinect(this);
  kinect.initDepth();

  // Blank image with alpha channel
  depthImg = new PImage(kinect.width, kinect.height, ARGB);

  // CREATE A NEW SPOUT OBJECT
  spout = new Spout(this);
  //CREATE A NAMED SENDER
  spout.createSender("BodyMapper Spout");

  println("Press R to toggle recording");
  //.mp4 is created with year month date hour minute and second data so we never save over a video
  videoExport = new VideoExport(this, "savedVideo/Video" + ye + mo + da + ho + mi + se + ".mp4");

  videoExport.setFrameRate(15);

  //videoExport.forgetFfmpegPath();
  //videoExport.dontSaveDebugInfo();

  java.io.File folder = new java.io.File(dataPath(""));

  // this is the filter (returns true if file's extension is .mov or .mp4)
  java.io.FilenameFilter movFilter = new java.io.FilenameFilter() {
    String[] exts = {
      ".mov", ".mp4"
    };
    public boolean accept(File dir, String name) {
      name = name.toLowerCase();
      for (String ext : exts) if (name.endsWith(ext)) return true;
      return false;
    }
  };
  //create an array of strings comprised of .mov/.mp4 in data directory
  FILMS = folder.list(movFilter);
  //using the number of videos in data directory we can create array of videos
  movies = new Movie[FILMS.length];

  for (String s : FILMS)  (movies[idx++] = new Movie(this, s))
    .playbin.connect(FINISHING);
  //start us off by playing the first movie in the array
  (m = movies[idx = 0]).loop();
}

void draw() {
  // Threshold the depth image
  int[] rawDepth = kinect.getRawDepth();
  for (int i=0; i < rawDepth.length; i++) {
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      //if pixels are in range then turn them to alpha transparency
      depthImg.pixels[i] = color(0, 0);
    } else {
      //otherwise turn them black
      depthImg.pixels[i] = color(0);
    }
  }
  //update pixels from depth map to reflect change of pixel colour
  depthImg.updatePixels();
  //blur the edges of depth map
  depthImg.filter(BLUR, 1);


  //draw movie to size of current display
  image(m, 0, 0, displayWidth, displayHeight);
  //draw depth map mask to size of current display
  image(depthImg, 0, 0, displayWidth, displayHeight);
  //share image through Spout
  // Sends at the size of the window
  spout.sendTexture();
  //if key r is pressed begin export of .mp4 to save directory
  if (recording) {
    videoExport.saveFrame();
  }
  //TODO - create second window for preferences and instructions
  fill(255);
  text("Recording is " + (recording ? "ON" : "OFF"), 30, 100);
  text("Press r to toggle recording ON/OFF", 30, 60);
  text("Video saved to file after application is closed", 30, 80);
}

void movieEvent(Movie m) {
  m.read();
}

void keyPressed() {
  int k = keyCode;
  if (k == RIGHT) {
    // Cycle forwards
    if (idx >= movies.length - 1) {
      idx = 0;
    } else {
      idx += 1;
    }
  } else if (k == LEFT) {
    // Cycle backwards
    if (idx <= 0) {
      idx = movies.length - 1;
    } else {
      idx -= 1;
    }
  }

  if (k == LEFT || k == RIGHT) {
    m.stop();
    (m = movies[idx]).loop();
    isPaused = false;
    background(0);
  }

  if (key == 'r' || key == 'R') {
    recording = !recording;
    println("Recording is " + (recording ? "ON" : "OFF"));
  }
}

@Override public void exit() {
  for (Movie m : movies)  m.stop();
  super.exit();
}

kinect v1 with w10 64bit


I am trying to follow Shiffman's example of Kinect use with Processing 3. I have loaded the Kinect4WinSDK library with the Contribution Manager; trying to load Kinect v2 shows an error on install. I run Processing 3.2.1. In p5.js mode I can run the simple ellipse example, but I can't figure out what I need to import in order to use my Kinect v1.

How to change position of "Contour" OpenCV


Hi everybody, I'm using the OpenCV findContours example, which I modified to use with the webcam. That works great, but I would like the webcam image to be in the center of my window. When I change the values of image(video, x, y); it only moves the video of the webcam and not the contours, which stay at the top left. Here is my code:

Thanks for your help :)

////////////////////////////////////////////
////////////////////////////////// LIBRARIES
////////////////////////////////////////////

import processing.serial.*;
import gab.opencv.*;
import processing.video.*;




/////////////////////////////////////////////////
////////////////////////////////// INITIALIZATION
/////////////////////////////////////////////////

Movie mymovie;
Capture video;
OpenCV opencv;
Contour contour;




////////////////////////////////////////////
////////////////////////////////// VARIABLES
////////////////////////////////////////////

int lf = 10;    // Linefeed in ASCII
String myString = null;
Serial myPort;  // The serial port
int sensorValue = 0;
int x = 300;




/////////////////////////////////////////////
////////////////////////////////// VOID SETUP
/////////////////////////////////////////////


void setup() {
  size(1280, 1024);
  // List all the available serial ports
  printArray(Serial.list());
  // Open the port you are using at the rate you want:
  myPort = new Serial(this, Serial.list()[1], 9600);
  myPort.clear();
  // Throw out the first reading, in case we started reading
  // in the middle of a string from the sender.
  myString = myPort.readStringUntil(lf);
  myString = null;
  opencv = new OpenCV(this, 720, 480);
  video = new Capture(this, 720, 480);
  mymovie = new Movie(this, "visage.mov");
  opencv.startBackgroundSubtraction(5, 3, 0.5);
  mymovie.loop();
}




////////////////////////////////////////////
////////////////////////////////// VOID DRAW
////////////////////////////////////////////


void draw() {
  image(mymovie, 0, 0);
  image(video, 20, 20);
  //tint(150, 20);
  noFill();
  stroke(255, 0, 0);
  strokeWeight(1);



  // check if there is something new on the serial port
  while (myPort.available() > 0) {
    // store the data in myString
    myString = myPort.readStringUntil(lf);
    // check if we really have something
    if (myString != null) {
      myString = myString.trim(); // let's remove whitespace characters
      // if we have at least one character...
      if (myString.length() > 0) {
        println(myString); // print out the data we just received
        // if we received a number (e.g. 123) store it in sensorValue; we will use this to change the background color.
        try {
          sensorValue = Integer.parseInt(myString);
        }
        catch(Exception e) {
        }
      }
    }
  }
  if (x < sensorValue) {
    video.start();
    opencv.loadImage(video);

  }

  if (x > sensorValue) {
    image(mymovie, 0, 0);
  }

  opencv.updateBackground();
  opencv.dilate();
  opencv.erode();

  for (Contour contour : opencv.findContours()) {
    contour.draw();
  }

}




//////////////////////////////////////////////
////////////////////////////////// VOID CUSTOM
//////////////////////////////////////////////


void captureEvent(Capture video) {
  video.read(); // read the next webcam frame
}

void movieEvent(Movie myMovie) {
  myMovie.read();
}
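The contours are drawn in the sketch's current coordinate system, not inside the video image, so the usual fix is to translate that coordinate system by the same offset you give to image() before calling contour.draw(). A sketch of the idea, replacing the drawing part of draw() above (offX/offY center a 720x480 image in the window):

float offX = (width - 720) / 2.0;
float offY = (height - 480) / 2.0;
image(video, offX, offY);

pushMatrix();
translate(offX, offY);  // shift contour space to match the video position
for (Contour contour : opencv.findContours()) {
  contour.draw();
}
popMatrix();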

Virtual interactive Keyboard Kinect and Processing 3


I am working on a fun project of mine: creating a virtual keyboard using a Kinect v1 and Processing 3. It works by reading the user's steps or touches when they touch or step on the area where the keyboard is projected. For example, let's say we have a keyboard shape projected on a wall or a floor; when you press "a", the letter "a" will be written on the canvas, and there will be a sound in response. So my questions are: how do I make the Kinect recognize the spot for each letter, and how do I make it register a press of that key ("a") when a hand or foot reaches that spot? Does anyone have any guidance, recommendations, or advice?
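One common pattern for this (a sketch under assumptions, not a full solution): define each projected key as a rectangle in depth-image coordinates, then count how many raw depth pixels inside that rectangle fall within a thin band just in front of the projection surface; if enough do, treat the key as pressed. This assumes the Open Kinect for Processing getRawDepth() API, and the key position and depth band values are hypothetical numbers you would calibrate for your own setup.

// hypothetical key region in depth-image coordinates
int keyX = 200, keyY = 300, keyW = 40, keyH = 40;
// depth band just in front of the projection surface (calibrate these)
int touchMin = 900, touchMax = 950;

boolean isKeyTouched(int[] depth, int imgW) {
  int hits = 0;
  for (int x = keyX; x < keyX + keyW; x++) {
    for (int y = keyY; y < keyY + keyH; y++) {
      int d = depth[x + y * imgW];
      if (d > touchMin && d < touchMax) hits++;
    }
  }
  // require a minimum blob of in-range pixels to reject noise
  return hits > 50;
}

In draw() you would call isKeyTouched(kinect.getRawDepth(), kinect.width) once per key, and trigger the letter and the sound on the frame where the result flips from false to true.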

How to edit body track image separately?


Using: Kinect v2, Windows 10, Processing 3.2.1, Kinect PV2

MaskTest code:

void draw() {
  background(0);

  image(kinect.getDepthImage(), 0, 0);
  image(kinect.getBodyTrackImage(), 512, 0);

  int[] rawData = kinect.getRawBodyTrack();

  foundUsers = false;
  for (int i = 0; i < rawData.length; i += 5) {
    if (rawData[i] != 255) {
      // found something
      foundUsers = true;
      break;
    }
  }
}

Question: I'm trying to edit the body track image in separate parts. What I want to do is enlarge just the head, but I could not find a way to edit the size of separate body parts in the body track image code. How can I edit the size of each part?
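Kinect PV2 hands you the body-track image as an ordinary PImage, so one way to enlarge just the head (a sketch with a hypothetical, hard-coded head position; you would normally take it from the skeleton joints) is to copy that region out with get() and draw it back scaled up:

PImage body = kinect.getBodyTrackImage();
image(body, 0, 0);

// hypothetical head position and radius in body-track image coordinates
int headX = 250, headY = 60, headR = 40;

// copy the head region and redraw it at twice the size, re-centred
PImage head = body.get(headX - headR, headY - headR, headR * 2, headR * 2);
image(head, headX - headR * 2, headY - headR * 2, headR * 4, headR * 4);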

CAN Kinect Physics Code Examples


I've been getting some notifications that the code examples for the CAN Kinect Physics tutorial no longer work. This is because the code formatting plugin was removed and because the code is severely outdated. For future generations, I will post the code examples below. This code is provided as is, since I stopped supporting it long ago (I haven't really used the Kinect since writing the tutorial). Perhaps those still interested in this code can gather here and, if needed, post updated versions of these code examples that run in more recent versions of Processing and the relevant libraries. I still get mails about this tutorial regularly and I will be pointing everyone to this thread. Good luck and happy, creative coding! :)

EDIT 30.05.2014

It seems the forum also has problems correctly displaying the code, or something else went wrong. Either way, I am providing a download link to the original three code examples (file size: 16 KB). Once again, I cannot and will no longer provide any support whatsoever on these code examples, as I stopped using the Kinect two years ago. Of course, feel free to share updated code examples via this thread.

LINK TO A ZIP-FILE CONTAINING THE ORIGINAL CODE EXAMPLES:

https://dl.dropboxusercontent.com/u/94122292/CANKinectPhysics.zip

OpenCV with OpenKinect and Xbox Kinect V2


Hi everyone,

I am trying to build face detection with the OpenCV and Open Kinect libraries. For the image input I want to use the Xbox Kinect v2. I am basing my code on the face detection example.

This is my code so far:

import gab.opencv.*;
import java.awt.Rectangle;

/* KINECT */
import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;

OpenCV opencv;
Kinect2 kinect2;

Rectangle[] faces;

void setup() {
  opencv = new OpenCV(this, 640/2, 480/2);
  size(640, 480);
  // Kinectv2
  kinect2 = new Kinect2(this);
  kinect2.initVideo();
  kinect2.initDevice();

  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  faces = opencv.detect();
}

void draw() {
  opencv.loadImage(kinect2.getVideoImage());
  image(kinect2.getVideoImage(), 0, 0, 640, 480);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

The problem seems to be in the line "opencv.loadImage(kinect2.getVideoImage());", since the detection does not work. When working with the iSight camera (using the built-in Capture class from the video library) instead of the Kinect, everything works perfectly fine.

Can anyone help?
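One thing that stands out (an observation, not a guaranteed fix): faces = opencv.detect() runs only once, in setup(), before any Kinect frame has arrived, so the rectangles never update. The OpenCV for Processing face-detection example re-detects every frame; adapted to this sketch (with the buffer assumed re-created at 640x480 in setup(), and the Kinect v2 colour frame resized to match, since it is larger than the buffer):

// in setup(): opencv = new OpenCV(this, 640, 480);

void draw() {
  PImage cam = kinect2.getVideoImage();
  cam.resize(640, 480);      // Kinect v2 colour frames are 1920x1080
  opencv.loadImage(cam);
  faces = opencv.detect();   // re-detect on every new frame

  image(cam, 0, 0);
  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}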

Kinect4WinExample


Trying to run this example (unmodified), I noticed that my bodies.size() value is always 0. I can see myself in the image and depth picture and expected to see the recognized bones. I cannot see what conditions will add a "body" to the bodies ArrayList.
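If I remember the Kinect4WinSDK example correctly (treat this as an assumption, not a confirmed answer), bodies are only added inside the skeleton event callbacks, which fire only once the SDK locks onto a full skeleton, so you usually need to stand a couple of metres back with your whole body in view. The relevant pattern from the example looks roughly like this:

ArrayList<SkeletonData> bodies = new ArrayList<SkeletonData>();

// called by the library when a skeleton appears; this is the only
// place a body ever gets added to the list
void appearEvent(SkeletonData _s) {
  if (_s.trackingState == Kinect.NUI_SKELETON_NOT_TRACKED) {
    return;  // ignore detections without a tracked skeleton
  }
  synchronized (bodies) {
    bodies.add(_s);
  }
}

void disappearEvent(SkeletonData _s) {
  synchronized (bodies) {
    for (int i = bodies.size() - 1; i >= 0; i--) {
      if (_s.dwTrackingID == bodies.get(i).dwTrackingID) {
        bodies.remove(i);
      }
    }
  }
}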

depth value puzzle


I try to capture the depth data with

PImage depth = kinect.GetDepth();

For a certain x,y pixel I try to get the depth value with:

depth.loadPixels();
println(depth.pixels[x + y*width]);

but I get funny, high negative values.

What's the trick to get reasonable numbers?
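Those high negative values are packed ARGB colours: GetDepth() returns a PImage, and a grey pixel such as 0xFF646464 prints as a negative int. Two fixes (a sketch, assuming the Kinect4WinSDK GetDepth() call from the question): index with the image's own width rather than the sketch width, and mask out a single channel to get a 0..255 grey level.

PImage depth = kinect.GetDepth();
depth.loadPixels();

// use the image's width, not the sketch width, to index pixels
int packed = depth.pixels[x + y * depth.width];

// keep only the blue channel: a 0..255 grey level instead of a raw ARGB int
int grey = packed & 0xFF;
println(grey);

If you need real distances rather than display grey levels, a raw depth array (where the library exposes one) is the better source, since the depth image is already remapped for display.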

Kinect, RotateY, PopMatrix, PushMatrix


Hello guys. Here I have three examples of the same code: two of them use the Kinect and the other uses a static point and the mouse. The final installation will use the two hands tracked by the Kinect. I have one problem with the Kinect and one with pushMatrix/popMatrix and rotate(). I explain the problems, and what I want to achieve, in the code below.

First example: here the cubes, the models, and the area that influences the models are not rotating. It works perfectly, except that for the model to be shown it first has to calibrate with the Kinect, because of the if statements.

import peasy.*;
import saito.objloader.*;
import SimpleOpenNI.*;
import spout.*;

//PrintWriter output;
OBJModel model ;
OBJModel Smodel ;
OBJModel tmpmodel ;

Spout spout;

PeasyCam cam;

SimpleOpenNI kinect;


float z=0;
float easing = 0.005;
float r;
float k;
int VertCount;
PVector[] Verts;
PVector[] locas;
PVector Mouse;
PVector Mouse2;

void setup()
{
  size(640*3, 480*3, P3D);
  frameRate(30);
  noStroke();

  kinect = new SimpleOpenNI(this);

  kinect.enableDepth();
  kinect.enableUser();

  model = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  model.enableDebug();
  model.scale(200);
  model.translateToCenter();

  Smodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  Smodel.enableDebug();
  Smodel.scale(200);
  Smodel.translateToCenter();

  tmpmodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  tmpmodel.enableDebug();
  tmpmodel.scale(200);
  tmpmodel.translateToCenter();

  //output = createWriter("positions.txt");

  cam = new PeasyCam(this, width/2, height/2, 0, 2300);

  spout = new Spout(this);

  spout.createSender("Self kinect");
}


void draw()
{
  background(0);
  lights();

  kinect.update();
  IntVector userList = new IntVector();
  kinect.getUsers(userList);

  int VertCount = model.getVertexCount ();
  Verts = new PVector[VertCount];
  locas = new PVector[VertCount];
  r =300;
  //k = k + 0.01;

  cam.setMouseControlled(false);


  if (userList.size() > 0) {

    textSize(300);
    text("Start1", width/2-150, height/2-150);
    fill(0, 102, 153);

    int userId = userList.get(0);

    if ( kinect.isTrackingSkeleton(userId)) {

      textSize(300);
      text("Start2", width/2-150, height/2-150);
      fill(0, 102, 153);

      PVector rightHand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, rightHand);

      PVector convertedRightHand = new PVector();
      kinect.convertRealWorldToProjective(rightHand, convertedRightHand);

      PVector leftHand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, leftHand);

      PVector convertedLeftHand = new PVector();
      kinect.convertRealWorldToProjective(leftHand, convertedLeftHand);


      PVector rightShoulder = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, rightShoulder);

      PVector convertedRightShoulder = new PVector();
      kinect.convertRealWorldToProjective(rightShoulder, convertedRightShoulder);


      PVector leftShoulder = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, leftShoulder);

      PVector convertedleftShoulder = new PVector();
      kinect.convertRealWorldToProjective(leftShoulder, convertedleftShoulder);


      //output.println(" This is the firstPose "+"rightHand "+rightHand+" leftHand "+leftHand);



      float rightHandZ =  map(rightHand.z, 5500, 7500, 1100, 1500);
      float ConrightHandZ = map(rightHandZ, 1100, 1500, 0, 1440);

      float leftHandZ =map(leftHand.z, 5500, 7500, 1100, 1500);
      float ConleftHandZ = map(leftHandZ, 1100, 1500, 0, 1440);

      Mouse = new PVector(rightHand.x, -rightHand.y, z);
      Mouse2 = new PVector(leftHand.x, -leftHand.y, z);


      println("rightHand "+rightHand.z+"leftHand "+leftHand.z);
      //println(frameCount);

      pushMatrix();

      translate(width/2, height/2, 0);
      rotateY(k);


      for (int i = 0; i < VertCount; i++) {
        //PVector orgv = model.getVertex(i);

        locas[i]= model.getVertex(i);
        Verts[i]= Smodel.getVertex(i);


        //PVector tmpv = new PVector();


        if (frameCount> 100) {



          float randX = noise(randomGaussian());
          float randY = noise(randomGaussian());
          float randZ = noise(randomGaussian());

          PVector Ran = new PVector(randX, randY, randZ);

          //float norX = abs(cos(k)) * randX;
          //float norY = abs(cos(k)) * randY;
          //float norZ = abs(cos(k)) * randZ;

          if ((Verts[i].y > Mouse.y  - r/2 && Verts[i].y < Mouse.y  + r/2 && Verts[i].x > Mouse.x  - r/2 && Verts[i].x < Mouse.x  + r/2 && Verts[i].z > Mouse.z  - 1920/2 && Verts[i].z <  Mouse.z  + 1920/2)||(Verts[i].y > Mouse2.y  - r/2 && Verts[i].y < Mouse2.y  + r/2 && Verts[i].x > Mouse2.x  - r/2 && Verts[i].x < Mouse2.x  + r/2 && Verts[i].z > Mouse2.z  - 1920/2 && Verts[i].z <  Mouse2.z  + 1920/2)) {
            tmpmodel.setVertex(i, locas[i].x, locas[i].y, locas[i].z);
          } else {

            Verts[i].x+=Ran.x;
            Verts[i].y+=Ran.y;
            Verts[i].z+=Ran.z;

            if (Verts[i].x > width/2 ) {
              Verts[i].x=-width/2;
            } else if (Verts[i].x < -width/2) {
              Verts[i].x=width/2;
            }
            if (Verts[i].y > height/2 ) {
              Verts[i].y=-height/2;
            } else if (Verts[i].y < -height/2) {
              Verts[i].y=height/2;
            }

            if (Verts[i].z < -720 ) {
              Verts[i].z=800/2;
            } else if ( Verts[i].z > 720) {
              Verts[i].z=-800/2;
            }
            tmpmodel.setVertex(i, Verts[i].x, Verts[i].y, Verts[i].z);
          }
        }
        // output.println("Verts " + Verts[i] + " locas " +locas[i]);
      }

      pushMatrix();
      rotateY(-k); //-----------------HERE
      translate(Mouse.x, Mouse.y, Mouse.z);
      rotateY(k); //-------------AND HERE
      noFill();
      stroke(255);
      strokeWeight(3);
      box(r, r, 1920);
      popMatrix();


      pushMatrix();
      rotateY(-k); //-----------------HERE
      translate(Mouse2.x, Mouse2.y, Mouse2.z);
      rotateY(k); //-------------AND HERE
      noFill();
      stroke(255);
      strokeWeight(3);
      box(r, r, 1920);
      popMatrix();


      noStroke();

      tmpmodel.draw();

      popMatrix();



      pushMatrix();
      translate(width/2, height/2, 0);
      rotateY(k);
      noFill();
      stroke(255);
      strokeWeight(7);
      box(width, height, 1920);
      popMatrix();
      //output.flush(); // Writes the remaining data to the file
      //output.close(); // Finishes the file
      //saveFrame();
      //println(z);
    }
  }

  spout.sendTexture();
}


void onNewUser(SimpleOpenNI kinect, int userID) {


  kinect.startTrackingSkeleton(userID);
}

//startTracking is altered
void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    println(" User calibrated !!!");
    kinect.startTrackingSkeleton(userId);
  } else {
    println("  Failed to calibrate user !!!");
    kinect.startTrackingSkeleton(userId);
  }
}

//There is a problem here; deleted the pose



void keyPressed() {
  if (keyCode == UP) {
    z++;
  }
  if (keyCode == DOWN) {
    z--;
  }
}
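On the pushMatrix/popMatrix question: the order of transforms is usually the whole story. translate() then rotateY() spins a shape around its own centre, while rotateY() then translate() makes it orbit the origin, which is what the rotateY(-k)/translate/rotateY(k) sandwich above is wrestling with. A minimal standalone illustration:

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  float k = frameCount * 0.02;

  // spins in place: move to the point first, then rotate
  pushMatrix();
  translate(100, height/2, 0);
  rotateY(k);
  box(50);
  popMatrix();

  // orbits the pivot: rotate first, then move outward
  pushMatrix();
  translate(width/2, height/2, 0);  // put the pivot on screen
  rotateY(k);
  translate(100, 0, 0);
  box(50);
  popMatrix();
}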

Second example: this is the same as the example above, but I tried to move the Kinect calibration out of the general animation of the model, which is defined by randX, randY, randZ and Nor. I tried to solve the problem with a boolean. The problem is that when the Kinect calibrates the user, the animation stops.

import peasy.*;
import saito.objloader.*;
import SimpleOpenNI.*;
import spout.*;

//PrintWriter output;
OBJModel model ;
OBJModel Smodel ;
OBJModel tmpmodel ;

Spout spout;

PeasyCam cam;

SimpleOpenNI kinect;


float z=0;
float easing = 0.005;
float r;
float k;
int VertCount;
PVector[] Verts;
PVector[] locas;
PVector Mouse;
PVector Mouse2;
PVector rightHand;
PVector convertedRightHand;
PVector leftHand;
PVector convertedLeftHand;
PVector rightShoulder;
PVector convertedRightShoulder;
PVector leftShoulder;
PVector convertedleftShoulder;
float rightHandZ;
float ConrightHandZ;
float leftHandZ;
float ConleftHandZ;

boolean Kin = false;

void setup()
{
  size(640*3, 480*3, P3D);
  frameRate(30);
  noStroke();

  kinect = new SimpleOpenNI(this);

  kinect.enableDepth();
  kinect.enableUser();

  model = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  model.enableDebug();
  model.scale(200);
  model.translateToCenter();


  Smodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  Smodel.enableDebug();
  Smodel.scale(200);
  Smodel.translateToCenter();


  tmpmodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  tmpmodel.enableDebug();
  tmpmodel.scale(200);
  tmpmodel.translateToCenter();

  //output = createWriter("positions.txt");

  cam = new PeasyCam(this, width/2, height/2, 0, 2300);


  spout = new Spout(this);

  spout.createSender("Self kinect");
}



void draw()
{
  background(0);
  lights();

  kinect.update();
  IntVector userList = new IntVector();
  kinect.getUsers(userList);

  int VertCount = model.getVertexCount ();
  Verts = new PVector[VertCount];
  locas = new PVector[VertCount];
  r =300;
  //k = k + 0.01;




  cam.setMouseControlled(false);







  if (userList.size() > 0) {

    textSize(300);
    text("Start1", width/2-150, height/2-150);
    fill(0, 102, 153);

    int userId = userList.get(0);

    if ( kinect.isTrackingSkeleton(userId)) {
      Kin = true;
      textSize(300);
      text("Start2", width/2-150, height/2-150);
      fill(0, 102, 153);

      rightHand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, rightHand);

      convertedRightHand = new PVector();
      kinect.convertRealWorldToProjective(rightHand, convertedRightHand);

      leftHand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, leftHand);

      convertedLeftHand = new PVector();
      kinect.convertRealWorldToProjective(leftHand, convertedLeftHand);


      rightShoulder = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, rightShoulder);

      convertedRightShoulder = new PVector();
      kinect.convertRealWorldToProjective(rightShoulder, convertedRightShoulder);


      leftShoulder = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, leftShoulder);

      convertedleftShoulder = new PVector();
      kinect.convertRealWorldToProjective(leftShoulder, convertedleftShoulder);


      //output.println(" This is the firstPose "+"rightHand "+rightHand+" leftHand "+leftHand);



      rightHandZ =  map(rightHand.z, 5500, 7500, 1100, 1500);
      ConrightHandZ = map(rightHandZ, 1100, 1500, 0, 1440);

      leftHandZ =map(leftHand.z, 5500, 7500, 1100, 1500);
      ConleftHandZ = map(leftHandZ, 1100, 1500, 0, 1440);

      Mouse = new PVector(rightHand.x, -rightHand.y, z);
      Mouse2 = new PVector(leftHand.x, -leftHand.y, z);

      pushMatrix();
      translate(width/2, height/2, 0);
      pushMatrix();
      rotateY(-k); //-----------------HERE
      translate(Mouse.x, Mouse.y, Mouse.z);
      rotateY(k); //-------------AND HERE
      noFill();
      stroke(255);
      strokeWeight(3);
      box(r, r, 1920);
      popMatrix();


      pushMatrix();
      rotateY(-k); //-----------------HERE
      translate(Mouse2.x, Mouse2.y, Mouse2.z);
      rotateY(k); //-------------AND HERE
      noFill();
      stroke(255);
      strokeWeight(3);
      box(r, r, 1920);
      popMatrix();
      popMatrix();
    }
  }


  //println("rightHand "+rightHand.z+"leftHand "+leftHand.z);
  //println(frameCount);




  pushMatrix();




  translate(width/2, height/2, 0);
  rotateY(k);


  for (int i = 0; i < VertCount; i++) {
    //PVector orgv = model.getVertex(i);

    locas[i]= model.getVertex(i);
    Verts[i]= Smodel.getVertex(i);


    //PVector tmpv = new PVector();


    if (frameCount> 10) {



      float randX = noise(randomGaussian());
      float randY = noise(randomGaussian());
      float randZ = noise(randomGaussian());

      PVector Ran = new PVector(randX, randY, randZ);

      //float norX = abs(cos(k)) * randX;
      //float norY = abs(cos(k)) * randY;
      //float norZ = abs(cos(k)) * randZ;








      if (Kin == true) {
        if ((Verts[i].y > Mouse.y  - r/2 && Verts[i].y < Mouse.y  + r/2 && Verts[i].x > Mouse.x  - r/2 && Verts[i].x < Mouse.x  + r/2 && Verts[i].z > Mouse.z  - 1920/2 && Verts[i].z <  Mouse.z  + 1920/2)||(Verts[i].y > Mouse2.y  - r/2 && Verts[i].y < Mouse2.y  + r/2 && Verts[i].x > Mouse2.x  - r/2 && Verts[i].x < Mouse2.x  + r/2 && Verts[i].z > Mouse2.z  - 1920/2 && Verts[i].z <  Mouse2.z  + 1920/2)) {
          tmpmodel.setVertex(i, locas[i].x, locas[i].y, locas[i].z);
        }
      } else if (Kin == true || Kin == false) {



        Verts[i].x+=Ran.x;
        Verts[i].y+=Ran.y;
        Verts[i].z+=Ran.z;

        if (Verts[i].x > width/2 ) {
          Verts[i].x=-width/2;
        } else if (Verts[i].x < -width/2) {
          Verts[i].x=width/2;
        }
        if (Verts[i].y > height/2 ) {
          Verts[i].y=-height/2;
        } else if (Verts[i].y < -height/2) {
          Verts[i].y=height/2;
        }

        if (Verts[i].z < -720 ) {
          Verts[i].z=800/2;
        } else if ( Verts[i].z > 720) {
          Verts[i].z=-800/2;
        }
        tmpmodel.setVertex(i, Verts[i].x, Verts[i].y, Verts[i].z);
      }
    }
    // output.println("Verts " + Verts[i] + " locas " +locas[i]);
  }




  noStroke();

  tmpmodel.draw();

  popMatrix();



  pushMatrix();
  translate(width/2, height/2, 0);
  rotateY(k);
  noFill();
  stroke(255);
  strokeWeight(7);
  box(width, height, 1920);
  popMatrix();
  //output.flush(); // Writes the remaining data to the file
  //output.close(); // Finishes the file
  //saveFrame();
  //println(z);
  if (userList.size() <= 0) {
    Kin=false;
  }

  spout.sendTexture();
  println(Kin);
}


void onNewUser(SimpleOpenNI kinect, int userID) {


  kinect.startTrackingSkeleton(userID);
}

//startTracking is altered
void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    println(" User calibrated !!!");
    kinect.startTrackingSkeleton(userId);
  } else {
    println("  Failed to calibrate user !!!");
    kinect.startTrackingSkeleton(userId);
  }
}

//There is a problem here; deleted the pose



void keyPressed() {
  if (keyCode == UP) {
    z++;
  }
  if (keyCode == DOWN) {
    z--;
  }
}

Help with SimpleOpenNI


So I am getting the error:

"SimpleOpenNI Error: Can't open device: DeviceOpen using default: no devices found Can't init SimpleOpenNI, maybe the camera is not connected!"

when loading a sketch with SimpleOpenNI. I'm on windows 8.1 with Kinect v1 1414, processing 2.2.1, kinect sdk 1.8, SimpleOpenNI 1.96, OpenNI/NITE Win64 0.27 (from https://code.google.com/archive/p/simple-openni/downloads).

It seems that the library is not communicating with the Kinect. The Kinect itself works fine, as I am able to get it running with Kinect4WinSDK.

I found this

https://forum.processing.org/two/discussion/comment/2677#Comment_2677

And it suggests I should switch out the libfreenect.0.1.2.dylib and libusb-1.0.0.dylib from

https://github.com/kronihias/head-pose-estimation/tree/master/mac-libs

The directory mentioned is for Mac, so I haven't tried this on my Windows 8.1 yet. I'm not sure where the equivalent directory is, and the link supplies the Mac libfreenect.0.1.2.dylib and libusb-1.0.0.dylib.

Anyway, can anyone with experience of this help guide me? Thanks.

Second image doesn't disappear


Hello, I have this code with two images appearing as instructions. The first appears and disappears fine, but the second image doesn't disappear; I want it to disappear after some seconds, but it doesn't work. I tried a boolean, frameCount, and a custom counter, but it doesn't disappear. I think maybe the problem is the if-tracking statement.

import peasy.*;
import saito.objloader.*;
import spout.*;
import SimpleOpenNI.*;

//PrintWriter output;
OBJModel model ;
OBJModel Smodel ;
OBJModel tmpmodel ;

Spout spout;

SimpleOpenNI kinect;

PeasyCam cam;

float z=0;

float r;
float k;
int VertCount;
PVector[] Verts;
PVector[] locas;
PVector[] Bez;
PVector[] Bez2;
PVector Mouse;
PVector Mouse2;
PVector rightHand;
PVector convertedRightHand;
PVector leftHand;
PVector convertedLeftHand;
PVector rightShoulder;
PVector convertedRightShoulder;
PVector leftShoulder;
PVector convertedleftShoulder;
float rightHandZ;
float ConrightHandZ;
float leftHandZ;
float ConleftHandZ;

boolean Kin;

boolean Text1;
boolean Text2;

PImage img;
PImage img2;


int Tmeter=0;

int VECS = 800;


void setup()
{
  size(1920, 1440, P3D);

  //hypotenuse 2450

  frameRate(30);
  noStroke();

  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();

  //800*800-->200,1920*1440-->300
  model = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  model.enableDebug();
  model.scale(300);
  model.translateToCenter();


  Smodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  Smodel.enableDebug();
  Smodel.scale(300);
  Smodel.translateToCenter();


  tmpmodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
  tmpmodel.enableDebug();
  tmpmodel.scale(300);
  tmpmodel.translateToCenter();

  //output = createWriter("positions.txt");

  cam = new PeasyCam(this, width/2, height/2, 0, 2610);
  //800*800 --> cam 1600,1920*1440--> cam 2610
  spout = new Spout(this);
  spout.createSender("Self kinect");

  img = loadImage("Text1.png");
  img2 = loadImage("Text2.png");
}



void draw()
{
  background(0);
  pointLight(255, 255, 255,
  width/2, height/2, width*2);

  kinect.update();
  IntVector userList = new IntVector();
  kinect.getUsers(userList);

  int VertCount = model.getVertexCount ();
  Verts = new PVector[VertCount];
  locas = new PVector[VertCount];
  Bez = new PVector[VertCount];
  Bez2 = new PVector[VertCount];
  r =300;
  k = k + 0.01;



  PVector psVerts[] = new PVector[VECS];
  PVector psVerts2[]= new PVector[VECS];


  cam.setMouseControlled(false);

  if (userList.size() <= 0) {
    Text1 = false;
    Text2= false;
    Tmeter=0;
  }

  if (userList.size() > 0) {


    int userId = userList.get(0);
    Text1 = true;

    if ( kinect.isTrackingSkeleton(userId)) {



      Kin= true;


      Text1 = false;


      Text2=true;



      pushMatrix();
      translate(0, 0, 1255);
      //the hypotenuse

      image(img2, 0, 0);

      popMatrix();
      Tmeter+=1;


      if (Tmeter>=10 ) {
        Text2=false;
        //print("ok");
        //print(Text2);
      }

      print(Tmeter);

      rightHand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, rightHand);

      convertedRightHand = new PVector();
      kinect.convertRealWorldToProjective(rightHand, convertedRightHand);

      leftHand = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, leftHand);

      convertedLeftHand = new PVector();
      kinect.convertRealWorldToProjective(leftHand, convertedLeftHand);


      rightShoulder = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, rightShoulder);

      convertedRightShoulder = new PVector();
      kinect.convertRealWorldToProjective(rightShoulder, convertedRightShoulder);


      leftShoulder = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, leftShoulder);

      convertedleftShoulder = new PVector();
      kinect.convertRealWorldToProjective(leftShoulder, convertedleftShoulder);






      rightHandZ =  map(rightHand.z, 5500, 7500, 1100, 1500);
      ConrightHandZ = map(rightHandZ, 1100, 1500, 0, 1440);

      leftHandZ =map(leftHand.z, 5500, 7500, 1100, 1500);
      ConleftHandZ = map(leftHandZ, 1100, 1500, 0, 1440);

      Mouse = new PVector(rightHand.x, -rightHand.y, z);
      Mouse2 = new PVector(leftHand.x, -leftHand.y, z);


      pushMatrix();
      translate(width/2, height/2, 0);



      pushMatrix();
      //-----------------HERE
      translate(Mouse2.x, Mouse2.y, Mouse2.z);
      //-------------AND HERE
      noFill();
      stroke(255);
      strokeWeight(3);
      box(r, r, 2450);
      popMatrix();



      pushMatrix();
      //-----------------HERE
      translate(Mouse.x, Mouse.y, Mouse.z);
      //-------------AND HERE
      noFill();
      stroke(255);
      strokeWeight(3);
      box(r, r, 2450);
      popMatrix();


      popMatrix();
    }
  }

  pushMatrix();




  translate(width/2, height/2, 0);



  for (int i = 0; i < VertCount; i++) {
    //PVector orgv = model.getVertex(i);

    locas[i]= model.getVertex(i);
    Verts[i]= Smodel.getVertex(i);


    //PVector tmpv = new PVector();





    //+4 for 1920*1440
    float randX = noise(randomGaussian())+4;
    float randY = noise(randomGaussian())+4;
    float randZ = noise(randomGaussian())+4;

    PVector Ran = new PVector(randX, randY, randZ);




    Verts[i].x+=Ran.x;
    Verts[i].y+=Ran.y;
    Verts[i].z+=Ran.z;

    if (Verts[i].x > width/2 ) {
      Verts[i].x=-width/2;
    } else if (Verts[i].x < -width/2) {
      Verts[i].x=width/2;
    }
    if (Verts[i].y > height/2 ) {
      Verts[i].y=-height/2;
    } else if (Verts[i].y < -height/2) {
      Verts[i].y=height/2;
    }

    if (Verts[i].z < -width/2 ) {
      Verts[i].z=800/2;
    } else if ( Verts[i].z > width/2) {
      Verts[i].z=-width/2;
    }


    pushMatrix();
    translate(width/2, height/2, 0);
    rotateY(k);
    tmpmodel.setVertex(i, Verts[i].x, Verts[i].y, Verts[i].z);
    popMatrix();




    if (Kin==true) {
      if ((Verts[i].y > Mouse.y  - r/2 && Verts[i].y < Mouse.y  + r/2 && Verts[i].x > Mouse.x  - r/2 && Verts[i].x < Mouse.x  + r/2 && Verts[i].z > Mouse.z  - 1225 && Verts[i].z <  Mouse.z  + 1225)||(Verts[i].y > Mouse2.y  - r/2 && Verts[i].y < Mouse2.y  + r/2 && Verts[i].x > Mouse2.x  - r/2 && Verts[i].x < Mouse2.x  + r/2 && Verts[i].z > Mouse2.z  - 1225 && Verts[i].z <  Mouse2.z  + 1225)) {




        pushMatrix();
        rotateY(k);


        tmpmodel.setVertex(i, locas[i].x, locas[i].y, locas[i].z);
        popMatrix();
      }
    }
  }




  rotateY(k);
  noStroke();
  tmpmodel.draw();








  popMatrix();
  if (Text1==true) {

    pushMatrix();
    translate(0, 0, 1255);
    //the hypotenuse

    image(img, 0, 0);

    popMatrix();
  }





  spout.sendTexture();
  //println(Tmeter);
}




void onNewUser(SimpleOpenNI kinect, int userID) {
  kinect.startTrackingSkeleton(userID);
}


void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    println(" User calibrated !!!");
    kinect.startTrackingSkeleton(userId);
  } else {
    println("  Failed to calibrate user !!!");
    kinect.startTrackingSkeleton(userId);
  }
}
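Reading the draw() above, the image(img2, ...) call is not actually guarded by Text2: it runs on every frame in which the skeleton is tracked, so setting the flag to false has no visible effect. Wrapping the drawing in the flag is a minimal sketch of the fix (note that 10 frames is about a third of a second at 30 fps, so the counter threshold probably wants to be larger too):

if (Text2) {
  pushMatrix();
  translate(0, 0, 1255);
  image(img2, 0, 0);
  popMatrix();

  Tmeter += 1;          // count frames only while the image is visible
  if (Tmeter >= 90) {   // ~3 seconds at 30 fps
    Text2 = false;
  }
}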

Using the Kinect for background subtraction and as a camera, as a substitute for a live green screen


Hi there, I would like to use the Kinect as a substitute for a green screen and send the result to Resolume for layering. Is there a patch available for combining the depth image and the RGB image? Also, I currently have a Kinect v1, model 1473, which as I understand it is buggy. Do you think it's better to go for a Kinect v2 for this? Thanks.
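I don't know of a single ready-made patch, but the usual Processing-side recipe (a sketch assuming the Open Kinect for Processing API for a Kinect v1, and ignoring the depth/RGB lens offset, which you would still need to calibrate) is to threshold the raw depth into a mask and apply it to the video frame before sending it on via Spout or Syphon:

import org.openkinect.processing.*;

Kinect kinect;
PImage mask;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  kinect.initVideo();
  mask = createImage(640, 480, ALPHA);
}

void draw() {
  int[] depth = kinect.getRawDepth();
  mask.loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // keep pixels closer than a raw value of ~850 (Kinect v1 units; tune it)
    mask.pixels[i] = (depth[i] > 0 && depth[i] < 850) ? color(255) : color(0);
  }
  mask.updatePixels();

  PImage rgb = kinect.getVideoImage();
  rgb.mask(mask);         // background becomes transparent
  background(0, 255, 0);  // or composite over anything you like
  image(rgb, 0, 0);
}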

Face Tracking


Hi friends! I'm looking for a way to track a moving face with a camera. I need basic tracking, just one face at a time.

I tried the Ketai library, but it works only on Android.

Searching the web I found some libraries for JavaScript, like TRACKINGJS, but I need something with more stability and performance.

Ideas and recommendations are welcome, thanks!
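Since you're already in Processing, the OpenCV for Processing library (gab.opencv) covers this case; its face-detection example boils down to roughly the following, which finds frontal faces in each webcam frame (taking one rectangle per frame gives you your single-face tracking):

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  for (Rectangle face : opencv.detect()) {
    rect(face.x, face.y, face.width, face.height);
  }
}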

Interactive Mirror


Hi guys, I just want to know if it's possible to build an interactive mirror with Processing and a Kinect. An interactive mirror is used for trying on clothes at stores. I have seen many examples, but is it possible with Processing?
