Channel: Kinect - Processing 2.x and 3.x Forum

How Do I make the sketch full screen?


It seems like this shouldn't be a hard question to find an answer to, but I have been searching for hours and can't find one.

I'm on Processing 3.2.1 and I'm trying to run my Kinect sketch in fullscreen. The sketch does go fullscreen, but the Kinect data stays at 640x480. I have a Kinect v1 (model 1414).
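One common workaround (a minimal sketch, assuming the Open Kinect for Processing library and a Kinect v1) is to accept that the device only delivers 640x480 frames and simply stretch them when drawing, since image() takes a target width and height:

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  fullScreen();               // run the sketch at the full display resolution
  kinect = new Kinect(this);
  kinect.initVideo();         // or initDepth() if you want the depth image
}

void draw() {
  background(0);
  // the Kinect v1 still delivers 640x480 frames; stretch them to the window
  image(kinect.getVideoImage(), 0, 0, width, height);
}

The depth image can be scaled the same way; the data itself stays 640x480, only the drawing is enlarged.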


Is there a way to make a Kinect do motion tracking with dancers with particle delay?


For a school project, my theater and dance classes want me to create a dance projection wall. I have found essentially no information on the subject and was hoping you all on the forums might be able to help. Any guidance you can give would be appreciated.
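One possible starting point, sketched very roughly below (it assumes the Open Kinect for Processing library and a Kinect v1, and is nowhere near a finished installation), is to threshold the raw depth data so only the dancer registers, compute the dancer's average position each frame, and keep a short history of those positions so older points fade out, which gives a simple "particle delay" trail:

import org.openkinect.processing.*;

Kinect kinect;
ArrayList<PVector> trail = new ArrayList<PVector>();
int threshold = 700;   // raw depth value; tune for your stage depth

void setup() {
  fullScreen();
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  float sumX = 0, sumY = 0, count = 0;
  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      if (depth[x + y * kinect.width] < threshold) {   // pixel close enough = dancer
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    // map the 640x480 centroid to the projection resolution
    trail.add(new PVector(map(sumX / count, 0, kinect.width, 0, width),
                          map(sumY / count, 0, kinect.height, 0, height)));
  }
  if (trail.size() > 60) trail.remove(0);   // keep roughly the last two seconds
  noStroke();
  for (int i = 0; i < trail.size(); i++) {
    fill(255, map(i, 0, trail.size(), 0, 255));   // older points fade out
    ellipse(trail.get(i).x, trail.get(i).y, 20, 20);
  }
}

From there you could replace the fading circles with a proper particle system, or track multiple blobs instead of a single centroid.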

Is it possible to make this with Processing?


Hi, I'm a student trying programming and Processing for the first time, so I don't have much of an idea about this tool yet.

I want to ask a question before I start my project.

Here is an example (the original post embedded an image/link): HoloDesk, by Microsoft.

By the way, the example uses a Kinect, but I'll use a Leap Motion instead of a Kinect.

Minim Ugens AudioOutput, playNote only once


Hi, I am working on a sketch that recognises faces and plays notes depending on the location/size of said faces. However, currently the playNote function just repeatedly plays the same note over and over, as opposed to just once.

I realise the problem is that the only way I could find to use the parameters of the face-detection rects is to put playNote() inside the same loop that draws them. Since the rects are redrawn every frame, the notes are of course also retriggered every frame.

How can I call playNote() with the faces[i] parameters without it having to live inside that same for loop?

All help is greatly appreciated. Here is my current code:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

// audio bits
import ddf.minim.*;
import ddf.minim.ugens.*;
AudioOutput out;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();

  Minim minim = new Minim(this);
  out = minim.getLineOut();
}

void draw() {
  scale(2);
  opencv.loadImage(video);

  image(video, 0, 0 );

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    out.playNote(0, 1, (faces[i].width*2));
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}
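One way to decouple the note triggering from the per-frame drawing (just a sketch of the idea, not a tested fix) is to remember how many faces were present on the previous frame and only call playNote() for detections that are new this frame. The fragment below slots into the sketch above; prevFaceCount is a new global:

int prevFaceCount = 0;   // add as a global next to AudioOutput out

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  Rectangle[] faces = opencv.detect();

  for (int i = 0; i < faces.length; i++) {
    // only trigger a note for faces beyond the count we already saw last frame
    if (i >= prevFaceCount) {
      out.playNote(0, 1, faces[i].width * 2);
    }
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
  prevFaceCount = faces.length;
}

The same idea works with any other trigger condition, e.g. only playing a note when faces.length changes.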

OpenCV - How can I add a different random image on every face?


I want to write code where every face gets a different random image out of the 10 images I have. I'm struggling to separate the different faces: every time I try, the same image appears on all faces. It would be great if someone could give me a hint on how to separate the faces in the code.

You will see I was a bit desperate. I can see the mistake, and I know it's a big one, but I can't see the solution.

import gab.opencv.*;
import processing.video.*;
import java.awt.*;
int num = 3;
PImage[] myImageArray = new PImage[num];
Capture video;
OpenCV opencv;

void setup() {
  size(800, 600);

  for (int i = 0; i < myImageArray.length; i++) {
    myImageArray[i] = loadImage(str(i) + ".png");
  }

  video = new Capture(this, 800/2, 600/2);
  opencv = new OpenCV(this, 800/2, 600/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);

  video.start();
}

void draw() {
  scale(2);
  opencv.loadImage(video);
  image(video, 0, 0);

  Rectangle[] faces = opencv.detect();
  println(faces.length);                      // print number of faces
  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);   // print position (x/y) of faces
    image(myImageArray[(int)random(num)], faces[i].x-70, faces[i].y-60,
          faces[i].width+80, faces[i].height+80);
  }
}

void captureEvent(Capture c) {
  c.read();
}
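A hint, sketched as a fragment that slots into the draw() above: the loop picks a fresh random index for every face on every frame, so the choice has to be made outside the per-frame random() call. Note that opencv.detect() does not track identity between frames, so "per face" here really means "per detection slot" (face 0, face 1, ...):

// inside draw(), replace the random pick with a choice that is stable per detection slot
for (int i = 0; i < faces.length; i++) {
  PImage img = myImageArray[i % num];   // face slot 0 -> image 0, slot 1 -> image 1, ...
  image(img, faces[i].x - 70, faces[i].y - 60,
        faces[i].width + 80, faces[i].height + 80);
}

For a random but stable pairing you could instead fill an int[] of image indices once in setup() and look the image up by slot.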

OpenCV Face Detection from A Video File


Hi all, I'm trying to do face detection with OpenCV for Processing from a video file. I've searched through examples on the web, but all of them seem to focus on face detection from a live feed. Is there a way to do it from a video file? I know that you can create Movie objects using the Processing Video library, but can I perhaps use that class with an OpenCV object? I appreciate any tips and pointers.
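Since Movie extends PImage, OpenCV for Processing's loadImage() should accept a movie frame just like a Capture frame. A minimal, untested sketch of the idea, assuming a hypothetical file video.mov in the data folder and an OpenCV object sized to match the movie's resolution:

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Movie movie;
OpenCV opencv;

void setup() {
  size(640, 360);
  movie = new Movie(this, "video.mov");    // hypothetical file name in data/
  opencv = new OpenCV(this, 640, 360);     // match this to the movie's resolution
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  movie.loop();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(movie, 0, 0);
  opencv.loadImage(movie);                 // a Movie is a PImage, so this works like Capture
  noFill();
  stroke(0, 255, 0);
  Rectangle[] faces = opencv.detect();
  for (Rectangle f : faces) {
    rect(f.x, f.y, f.width, f.height);
  }
}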

kinect not found

Kinect Logo Tracking


Hello all,

I have a project to live-track a guitarist with a Kinect and emit particles from the neck of the guitar. I do not know if this is possible with a Kinect v2 and Processing. I was thinking of using a logo or QR code as a tracker on the end of the guitar, so that the Kinect tracks only that image and the particles are emitted from that point.

Is there any library for this idea? Any suggestions?

Thank you


How to run stereo vision on a stereo video data


Hi,

I am new to OpenCV and want to run stereo vision on stereo video data using OpenCV and Visual Studio. I want to show the results by colour-coding the disparities.

Does anyone know where to start? While searching the web, I also came across a few third-party libraries for this.

Do we really need to use them?

Can we achieve it using OpenCV itself?

I would really appreciate any help!

Gesture recognition and interactive animation using Kinect


Hi all,

I am planning to make an interactive installation / video wall as a graduation project, but I'm a complete noob at programming. My idea is to make, say, 4 different animations with After Effects and use them in an interactive installation that lets people influence which animation plays, using a Kinect and gesture recognition. I'm wondering whether I should buy a Kinect 1 or 2. I know the basics of Processing, but should I look into Processing or openFrameworks for this project? I would also be extremely grateful for any tips on where to start. Thanks!

Kinect1 Disconnection Issue with Processing 3


Hello. After following all the steps to install the Kinect1 library per https://github.com/shiffman/OpenKinect-for-Processing, I am using a Kinect1 with Processing 3. When I run a program that uses the Kinect1, it works fine at first. However, at some point (sometimes after 2 minutes, sometimes after 2 hours), the Kinect1 gets disconnected even though the program keeps running. The program doesn't crash, but since the Kinect is disconnected the user cannot proceed in the application. In fact, while the program runs, the console keeps printing the line "isochronous transfer error: 1". Even while this line keeps coming up, the Kinect works properly, until it eventually disconnects and never recovers.

So I wonder what the cause could be and how I could avoid the disconnection problem. Also, is there a way to reconnect the Kinect1 after it disconnects?

SimpleOpenNI doesn't work in my Windows 10


Hello, I am using Windows 10 on an Intel NUC (model NUC6i5SYH). I downloaded Processing 2 and Processing 3. I wanted to run some SimpleOpenNI example sketches in Processing 2, but they couldn't start at all. I wondered if it was a library issue, so I copied the same SimpleOpenNI library and tried several example sketches on another Windows 10 computer, and there they worked without any problems.

So I wonder why SimpleOpenNI is not able to run in Processing 2 on my Windows 10 computer, and how to solve it. Thank you!

kinect v2 for background subtraction


Hi there! I've got a Kinect v2 on a Mac and I'm sending to Resolume. I would like to send either the registered image or a blend of the depth and RGB images, so that I can use the video of the person moving, with alpha. I have managed to send two separate Syphon feeds into Resolume, one for the depth image and one for the RGB, but how do I combine them so that Resolume receives just one image? I also need to be able to play with the threshold of the depth data.

Thanks!
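One way to end up with a single feed (a sketch of the idea, assuming the Open Kinect for Processing library) is to build the combined image yourself: walk the raw depth array, and wherever a pixel falls inside your depth thresholds copy the corresponding pixel from the registered RGB image, otherwise write a fully transparent pixel. You then publish only that composited ARGB image through Syphon:

import org.openkinect.processing.*;

Kinect2 kinect2;
PImage composite;
int minDepth = 500, maxDepth = 1500;   // millimetres; the adjustable threshold

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initRegistered();            // RGB aligned to the depth camera
  kinect2.initDevice();
  composite = createImage(kinect2.depthWidth, kinect2.depthHeight, ARGB);
}

void draw() {
  background(0);
  int[] depth = kinect2.getRawDepth();
  PImage rgb = kinect2.getRegisteredImage();
  rgb.loadPixels();
  composite.loadPixels();
  for (int i = 0; i < depth.length; i++) {
    if (depth[i] > minDepth && depth[i] < maxDepth) {
      composite.pixels[i] = rgb.pixels[i];   // keep the person, fully opaque
    } else {
      composite.pixels[i] = color(0, 0);     // transparent background
    }
  }
  composite.updatePixels();
  image(composite, 0, 0);
  // publish `composite` (not the whole canvas) through your Syphon server
  // so Resolume receives one image with alpha
}

Mapping minDepth/maxDepth to keys or a slider gives you the live threshold control.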

SimpleOpenNI Library error


/* --------------------------------------------------------------------------
 * SimpleOpenNI UserCoordsys Test
 * --------------------------------------------------------------------------
 * Processing Wrapper for the OpenNI/Kinect library
 * http://code.google.com/p/simple-openni
 * --------------------------------------------------------------------------
 * prog: Max Rheiner / Interaction Design / zhdk / http://iad.zhdk.ch/
 * date: 05/06/2012 (m/d/y)
 * --------------------------------------------------------------------------
 * This example shows how to set up a user-defined coordinate system.
 * You have to define the new null point and the x/z axes.
 * This can also be useful if you work with two independent cameras.
 * -------------------------------------------------------------------------- */

import SimpleOpenNI.*;

final static int CALIB_START     = 0;
final static int CALIB_NULLPOINT = 1;
final static int CALIB_X_POINT   = 2;
final static int CALIB_Z_POINT   = 3;
final static int CALIB_DONE      = 4;

SimpleOpenNI context;
boolean screenFlag = true;
int calibMode = CALIB_START;

PVector nullPoint3d = new PVector();
PVector xDirPoint3d = new PVector();
PVector zDirPoint3d = new PVector();
PVector tempVec1 = new PVector();
PVector tempVec2 = new PVector();
PVector tempVec3 = new PVector();

PMatrix3D userCoordsysMat = new PMatrix3D();

void setup() {
  size(640, 480);
  smooth();

  context = new SimpleOpenNI(this);

  context.setMirror(false);

  // enable depthMap generation
  if (context.enableDepth() == false) {
    println("Can't open the depthMap, maybe the camera is not connected!");
    exit(); return;
  }

  if (context.enableRGB() == false) {
    println("Can't open the rgbMap, maybe the camera is not connected or there is no rgbSensor!");
    exit(); return;
  }

  // align depth data to image data
  context.alternativeViewPointDepthToImage();

  // create the font
  textFont(createFont("Georgia", 16));
}

void draw() {
  // update the cam
  context.update();

  if (screenFlag)
    image(context.rgbImage(), 0, 0);
  else
    image(context.depthImage(), 0, 0);

  // draw text background
  pushStyle();
  noStroke();
  fill(0, 200, 0, 100);
  rect(0, 0, width, 40);
  popStyle();

  switch(calibMode) {
    case CALIB_START:     text("To start the calibration press SPACE!", 5, 30); break;
    case CALIB_NULLPOINT: text("Set the nullpoint with the left mousebutton", 5, 30); break;
    case CALIB_X_POINT:   text("Set the x-axis with the left mousebutton", 5, 30); break;
    case CALIB_Z_POINT:   text("Set the z-axis with the left mousebutton", 5, 30); break;
    case CALIB_DONE:      text("New nullpoint is defined!", 5, 30); break;
  }

  // draw
  drawCalibPoint();

  // draw the user defined coordinate system (with a size of 500mm)
  if (context.hasUserCoordsys()) {
    PVector temp = new PVector();
    PVector nullPoint = new PVector();

pushStyle();

strokeWeight(3);
noFill();

context.convertRealWorldToProjective(new PVector(0, 0, 0), tempVec1);
stroke(255, 255, 255, 150);
ellipse(tempVec1.x, tempVec1.y, 10, 10);

context.convertRealWorldToProjective(new PVector(500, 0, 0), tempVec2);
stroke(255, 0, 0, 150);
line(tempVec1.x, tempVec1.y,
tempVec2.x, tempVec2.y);

context.convertRealWorldToProjective(new PVector(0, 500, 0), tempVec2);
stroke(0, 255, 0, 150);
line(tempVec1.x, tempVec1.y,
tempVec2.x, tempVec2.y);

context.convertRealWorldToProjective(new PVector(0, 0, 500), tempVec2);
stroke(0, 0, 255, 150);
line(tempVec1.x, tempVec1.y,
tempVec2.x, tempVec2.y);

popStyle();

  }
}

void drawCalibPoint() {
  pushStyle();

  strokeWeight(3);
  noFill();

  switch(calibMode) {
  case CALIB_START:
    break;

  case CALIB_NULLPOINT:
    context.convertRealWorldToProjective(nullPoint3d, tempVec1);

stroke(255, 255, 255, 150);
ellipse(tempVec1.x, tempVec1.y, 10, 10);
break;

  case CALIB_X_POINT:
    // draw the null point
    context.convertRealWorldToProjective(nullPoint3d, tempVec1);
    context.convertRealWorldToProjective(xDirPoint3d, tempVec2);

stroke(255, 255, 255, 150);
ellipse(tempVec1.x, tempVec1.y, 10, 10);

stroke(255, 0, 0, 150);
ellipse(tempVec2.x, tempVec2.y, 10, 10);
line(tempVec1.x, tempVec1.y, tempVec2.x, tempVec2.y);

break;

case CALIB_Z_POINT:

context.convertRealWorldToProjective(nullPoint3d, tempVec1);
context.convertRealWorldToProjective(xDirPoint3d, tempVec2);
context.convertRealWorldToProjective(zDirPoint3d, tempVec3);

stroke(255, 255, 255, 150);
ellipse(tempVec1.x, tempVec1.y, 10, 10);

stroke(255, 0, 0, 150);
ellipse(tempVec2.x, tempVec2.y, 10, 10);
line(tempVec1.x, tempVec1.y, tempVec2.x, tempVec2.y);

stroke(0, 0, 255, 150);
ellipse(tempVec3.x, tempVec3.y, 10, 10);
line(tempVec1.x, tempVec1.y, tempVec3.x, tempVec3.y);

break;

case CALIB_DONE:

break;

}

  popStyle();
}

void keyPressed() {
  switch(key) {
  case '1':
    screenFlag = !screenFlag;
    break;
  case ' ':
    calibMode++;
    if (calibMode > CALIB_DONE) {
      calibMode = CALIB_START;
      context.resetUserCoordsys();
    } else if (calibMode == CALIB_DONE) {
      // set the calibration
      context.setUserCoordsys(nullPoint3d.x, nullPoint3d.y, nullPoint3d.z,
                              xDirPoint3d.x, xDirPoint3d.y, xDirPoint3d.z,
                              zDirPoint3d.x, zDirPoint3d.y, zDirPoint3d.z);

  println("Set the user define coordinatesystem");
  println("nullPoint3d: " + nullPoint3d);
  println("xDirPoint3d: " + xDirPoint3d);
  println("zDirPoint3d: " + zDirPoint3d);

  /*
  // test
  context.getUserCoordsysTransMat(userCoordsysMat);
  PVector temp = new PVector();

  userCoordsysMat.mult(new PVector(0, 0, 0), temp);
  println("PVector(0,0,0): " + temp);

  userCoordsysMat.mult(new PVector(500, 0, 0), temp);
  println("PVector(500,0,0): " + temp);

  userCoordsysMat.mult(new PVector(0, 500, 0), temp);
  println("PVector(0,500,0): " + temp);

  userCoordsysMat.mult(new PVector(0, 0, 500), temp);
  println("PVector(0,0,500): " + temp);
  */
}

break;

  }
}

void mousePressed() {
  if (mouseButton == LEFT) {
    PVector[] realWorldMap = context.depthMapRealWorld();
    int index = mouseX + mouseY * context.depthWidth();

switch(calibMode)
{
case CALIB_NULLPOINT:
  nullPoint3d.set(realWorldMap[index]);
  break;
case CALIB_X_POINT:
  xDirPoint3d.set(realWorldMap[index]);
  break;
case CALIB_Z_POINT:
  zDirPoint3d.set(realWorldMap[index]);
  break;
}

  } else {
    PVector[] realWorldMap = context.depthMapRealWorld();
    int index = mouseX + mouseY * context.depthWidth();

println("Point3d: " + realWorldMap[index].x + "," + realWorldMap[index].y + "," + realWorldMap[index].z);

  }
}

void mouseDragged() {
  if (mouseButton == LEFT) {
    PVector[] realWorldMap = context.depthMapRealWorld();
    int index = mouseX + mouseY * context.depthWidth();

switch(calibMode)
{
case CALIB_NULLPOINT:
  nullPoint3d.set(realWorldMap[index]);
  break;
case CALIB_X_POINT:
  xDirPoint3d.set(realWorldMap[index]);
  break;
case CALIB_Z_POINT:
  zDirPoint3d.set(realWorldMap[index]);
  break;
}

}

}

Each time I try to run this code I get this error:

Can't load SimpleOpenNI library (SimpleOpenNI64) : java.lang.UnsatisfiedLinkError: C:\Users\maryl\OneDrive\Documents\Processing\libraries\SimpleOpenNI\library\SimpleOpenNI64.dll: Can't find dependent libraries. Verify if you installed SimpleOpenNI correctly. http://code.google.com/p/simple-openni/wiki/Installation A library relies on native code that's not available. Or only works properly when the sketch is run as a 32-bit application.

Detect if point of Blob is inside point of Array of Buttons


Shape primitive flickering while multithreading


Hello everyone!

First time posting so a little bit nervous :-)

The thing is that I'm building an application with the Kinect, and my first approach was to have both the depth detection/processing and the drawing in the same thread. It worked fine, but sometimes it drew a bit slowly.

Now I have moved the Kinect detection/processing part to another thread. It has the typical loop of going through the kinect.getRawDepth() array and writing, into a separate PImage, the pixels that are close enough to the camera, so that's where I do the pixel manipulation. The problem is that since I did that, the primitive shapes drawn in the main loop flicker like hell. This only happens with primitives; pictures and images are totally fine.

Also, if I comment out just the for loop, everything is fine. Any idea why this is happening and how to solve it? Thank you very much in advance!

This is the code of run() inside the thread:

void run(){
  while(true){
    if(!calibrating){
      raw_depths = kinect.getRawDepth();
      for (int i = 0; i < raw_depths.length; i++) {
        if(raw_depths[i] > LIMIT) depth_image.pixels[i] = color(0);
        else depth_image.pixels[i] = color(255);
      }

      depth_image.updatePixels();

      opencv.loadImage(depth_image);
      opencv.threshold(ERODE_THR);
      opencv.erode();
      depth_image = opencv.getSnapshot();
      // Compute BLOBS
      theBlobDetection.computeBlobs(depth_image.pixels);
    }
  }
}
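The flicker is most likely a race condition: the worker thread is writing into depth_image and the OpenCV buffers while the main thread is in the middle of rendering a frame. A common pattern (sketched below against the run() above, with an assumed 640x480 depth resolution) is to do all the work in a private buffer, publish the finished image under a lock, and have draw() read it under the same lock; a short sleep also stops the while(true) loop from spinning at 100% CPU:

final Object lock = new Object();   // shared between the worker thread and draw()
PImage shared;                      // last finished frame, only touched under `lock`

void run() {
  PImage work = createImage(640, 480, RGB);   // private buffer; size assumed 640x480
  while (true) {
    if (!calibrating) {
      int[] raw = kinect.getRawDepth();
      work.loadPixels();
      for (int i = 0; i < raw.length; i++) {
        work.pixels[i] = (raw[i] > LIMIT) ? color(0) : color(255);
      }
      work.updatePixels();

      opencv.loadImage(work);
      opencv.threshold(ERODE_THR);
      opencv.erode();
      PImage result = opencv.getSnapshot();
      theBlobDetection.computeBlobs(result.pixels);

      synchronized (lock) {
        shared = result;            // publish the finished frame atomically
      }
    }
    try { Thread.sleep(10); } catch (InterruptedException e) { }
  }
}

// in draw(), read the shared image under the same lock:
// PImage frame;
// synchronized (lock) { frame = shared; }
// if (frame != null) image(frame, 0, 0);

The key point is that the image the renderer reads is never the one the worker is still writing into.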

Cannot find class or type named "EXAMPLE"


I keep getting this error, and I don't know why it is not working.

// Daniel Shiffman
// Tracking the average location beyond a given depth threshold
// Thanks to Dan O'Sullivan

// https://github.com/shiffman/OpenKinect-for-Processing
// http://shiffman.net/p5/kinect/

import org.openkinect.freenect.*;
import org.openkinect.processing.*;

ParticleSystem ps;
PImage texture;

// The kinect stuff is happening in another class
KinectTracker tracker;
Kinect kinect;

void setup() {
  size(640, 640);
  kinect = new Kinect(this);
  tracker = new KinectTracker();
  // load the particle texture once here (rather than on every frame in draw())
  texture = loadImage("texture.png");
  texture.resize(200, 200);
  ps = new ParticleSystem(0, new PVector(width/2, height-60), texture);
}

void draw() {
  background(0);
  // Run the tracking analysis
  tracker.track();
  // Show the image
  tracker.display();
  PVector v1 = tracker.getPos();
  image(texture, v1.x - 100, v1.y - 100);
  // fill(#ff9999);
  // ellipse(v1.x, v1.y, 100, 100);

  // Display some info
  int t = tracker.getThreshold();
  fill(255);
  text("threshold: " + t + " " + "framerate: " + int(frameRate) + " " +
       "UP increase threshold, DOWN decrease threshold", 10, 500);

  // Calculate a "wind" force based on mouse horizontal position
  float dx = map(mouseX, 0, width, -0.2, 0.2);
  PVector wind = new PVector(dx, 0);
  ps.applyForce(wind);
  ps.run();
  for (int i = 0; i < 2; i++) {
    ps.addParticle();
  }

  // Draw an arrow representing the wind force
  drawVector(wind, new PVector(width/2, 50, 0), 500);
}

// Renders a vector object 'v' as an arrow at a location 'loc'
void drawVector(PVector v, PVector loc, float scayl) {
  pushMatrix();
  float arrowsize = 4;
  // Translate to location to render vector
  translate(loc.x, loc.y);
  stroke(255);
  // Call vector heading function to get direction (note that pointing up is a heading of 0) and rotate
  rotate(v.heading());
  // Calculate length of vector & scale it to be bigger or smaller if necessary
  float len = v.mag()*scayl;
  // Draw three lines to make an arrow (draw pointing up since we've rotated to the proper direction);
  // these three line() calls were missing from the posted code and are restored from the standard example
  line(0, 0, len, 0);
  line(len, 0, len-arrowsize, +arrowsize/2);
  line(len, 0, len-arrowsize, -arrowsize/2);
  popMatrix();
}

class ParticleSystem {

  ArrayList<Particle> particles;   // An arraylist for all the particles
  PVector origin;                  // An origin point for where particles are birthed
  PImage img;

  ParticleSystem(int num, PVector v, PImage img_) {
    particles = new ArrayList<Particle>();        // Initialize the arraylist
    origin = v.get();                             // Store the origin point
    img = img_;
    for (int i = 0; i < num; i++) {
      particles.add(new Particle(origin, img));   // Add "num" amount of particles to the arraylist
    }
  }

  void run() {
    for (int i = particles.size()-1; i >= 0; i--) {
      Particle p = particles.get(i);
      p.run();
      if (p.isDead()) {
        particles.remove(i);
      }
    }
  }

  // Method to add a force vector to all particles currently in the system
  void applyForce(PVector dir) {
    // Enhanced loop!!!
    for (Particle p : particles) {
      p.applyForce(dir);
    }
  }

  void addParticle() {
    particles.add(new Particle(origin, img));
  }

}

// The Particle class itself was missing from the posted code, which is what produces
// the "Cannot find a class or type" error wherever new Particle(...) is used.
// A minimal reconstruction (the fields and constructor here are assumptions based on
// how they are used in the methods below):
class Particle {
  PVector loc, vel, acc;
  float lifespan;
  PImage texture;

  Particle(PVector origin, PImage img_) {
    loc = origin.get();
    vel = new PVector(random(-1, 1), random(-2, 0));
    acc = new PVector(0, 0.05);
    lifespan = 255;
    texture = img_;
  }

  void run() {
    update();
    render();
  }

  // Method to apply a force vector to the Particle object
  // Note we are ignoring "mass" here
  void applyForce(PVector f) {
    acc.add(f);
  }

  // Method to update location
  void update() {
    vel.add(acc);
    loc.add(vel);
    lifespan -= .5;
    acc.mult(0);   // clear acceleration
  }

  // Method to display
  void render() {
    tint(255, lifespan);
    image(texture, loc.x, loc.y - 200);
    // Drawing a circle instead
    // fill(255, lifespan);
    // noStroke();
    // ellipse(loc.x, loc.y, img.width, img.height);
  }

  // Is the particle still useful?
  boolean isDead() {
    return lifespan <= 0.0;
  }
}

// Adjust the threshold with key presses
void keyPressed() {
  int t = tracker.getThreshold();
  if (key == CODED) {
    if (keyCode == UP) {
      t += 5;
      tracker.setThreshold(t);
    } else if (keyCode == DOWN) {
      t -= 1000;
      tracker.setThreshold(t);
    }
  }
}

Hand tracking with Kinect/Processing


Hi,

I am trying to make a project with Processing and the Kinect. I have already installed the right libraries (I use SimpleOpenNI and FingerTracker) and everything seems to work. I followed a tutorial which showed how to make the Kinect detect hands, especially fingers. It's this one:

// import the FingerTracker library
// and the SimpleOpenNI library for Kinect access
import fingertracker.*;
import SimpleOpenNI.*;

// declare FingerTracker and SimpleOpenNI objects
FingerTracker fingers;
SimpleOpenNI kinect;

// set a default threshold distance:
// 625 corresponds to about 2-3 feet from the Kinect
int threshold = 625;

void setup() {
  size(640, 480);

  // initialize your SimpleOpenNI object
  // and set it up to access the depth image
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  // mirror the depth image so that it is more natural
  kinect.setMirror(true);

  // initialize the FingerTracker object
  // with the width and height of the Kinect depth image
  fingers = new FingerTracker(this, 640, 480);

  // the "melt factor" smooths out the contour, making the finger tracking
  // more robust, especially at short distances
  // (farther away you may want a lower number)
  fingers.setMeltFactor(100);
}

void draw() {
  // get new depth data from the kinect
  kinect.update();

  // get a depth image and display it
  PImage depthImage = kinect.depthImage();
  image(depthImage, 0, 0);

  // update the depth threshold beyond which we'll look for fingers
  fingers.setThreshold(threshold);

  // access the "depth map" from the Kinect: an array of ints with the
  // full-resolution depth data (i.e. 500-2047 instead of 0-255),
  // and pass that data to our FingerTracker
  int[] depthMap = kinect.depthMap();
  fingers.update(depthMap);

  // iterate over all the contours found
  // and display each of them with a green line
  stroke(0, 255, 0);
  for (int k = 0; k < fingers.getNumContours(); k++) {
    fingers.drawContour(k);
  }

  // iterate over all the fingers found
  // and draw them as a red circle
  noStroke();
  fill(255, 0, 0);
  for (int i = 0; i < fingers.getNumFingers(); i++) {
    PVector position = fingers.getFinger(i);
    ellipse(position.x - 5, position.y - 5, 10, 10);
  }

  // show the threshold on the screen
  fill(255, 0, 0);
  text(threshold, 10, 20);
}

// keyPressed event:
// pressing the '-' key lowers the threshold by 10
// pressing the '+/=' key increases it by 10
void keyPressed() {
  if (key == '-') {
    threshold -= 10;
  }
  if (key == '=') {
    threshold += 10;
  }
}

Everything works great, but I'm trying to make it detect when my fingers are over certain locations of the window. I am creating a picture in Photoshop which will be displayed on the screen in Processing, and I want the JPG to have regions where several things happen when my fingers touch them (for example, objects that appear suddenly, other windows opening...). Is it possible? How can I do it?

Thanks in advance for your answers.
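Yes, it's possible: each finger already comes back as a PVector in the same 640x480 coordinates as the depth image, so you can test it against rectangular hot spots that match the regions of your Photoshop picture. A rough fragment that slots into the draw() above (the hotspot coordinates are made up):

// define the interactive regions once, e.g. as globals
// (x, y, width, height in the same 640x480 coordinates as the depth image)
int[][] hotspots = { {50, 50, 150, 100}, {400, 300, 180, 120} };

// inside draw(), after fingers.update(depthMap):
for (int i = 0; i < fingers.getNumFingers(); i++) {
  PVector p = fingers.getFinger(i);
  for (int h = 0; h < hotspots.length; h++) {
    if (p.x > hotspots[h][0] && p.x < hotspots[h][0] + hotspots[h][2] &&
        p.y > hotspots[h][1] && p.y < hotspots[h][1] + hotspots[h][3]) {
      // finger i is touching region h: trigger whatever should happen here,
      // e.g. draw an overlay image or flip a boolean that reveals an object
      fill(0, 0, 255);
      ellipse(p.x, p.y, 30, 30);
    }
  }
}

Drawing your JPG with image() as the background and aligning the hotspot rectangles with its regions is usually enough for this kind of touch-wall interaction.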

Is it possible to live stream video from the Kinect V2 depth & RGB camera ?


Hi there, I'm currently doing some research on holographic projection technology. I was looking into how to access the Kinect V2 and live stream from its cameras. I was wondering if it's possible to live stream by combining the RGB and depth cameras at the same time; my goal is to live stream a full-body 3D mesh, something like what the people at https://www.mimesysvr.com/ are doing.

SimpleOpenNI Library error occurs after update processing


I am trying to make a project with a Kinect v1 and Processing 3.2.1. Before I updated Processing, everything was working (I just tried the "Hello" examples by Daniel Shiffman). But now that I use Processing 3.2.1, a "SimpleOpenNI library cannot be found" error occurs. I deleted all the libraries and downloaded them again. I downloaded the library from this link: https://code.google.com/archive/p/simple-openni/downloads and even though I put the library in ..sketchfolder/libraries, I can't run the example sketches. I am using Windows 10. How can I fix this problem? I just want to run the example sketches.
