Channel: Kinect - Processing 2.x and 3.x Forum

kinect2 fail on export


Hello list, I'm trying to make an executable from the examples provided with the org.openkinect.processing.Kinect2 library.

When I start the app I only see a grey screen; it seems to be stuck in setup().

I'm working on Yosemite 10.10.4 and Processing 3.0a11.

Any suggestions?

Thanks, Bruno.


Is the Kinect the only hardware I need to make interactive art for TV/LCD displays?


So it would be Processing + Kinect + an LCD or TV, and it's ready to go?

Any good book, or anyone who teaches this kind of thing?

Good book or site to learn practical interactive Kinect applications?


Is the Processing software enough to make something like this: [two example images]

Does anyone know how to do a practical installation, for example in public places?

What hardware do I need? Just a Kinect? Also, does it run on a computer, or is the program already embedded in a small microchip? I mean, how would it work on a large screen or display?

Does anyone know any good books or sites that teach practical applications of graphics for malls or retail displays?

Render HTML page using Processing.


Hi there, I'm looking for help with a performance scenario. I started developing creative-coding applications in Processing, but last year I moved to openFrameworks, so I'm not up to date over here.

I need to render an HTML page that is served on my local machine, control it using the Kinect, and export it via Syphon to MadMapper.

I'm doing this in openFrameworks: ofxAwesomium renders the HTML, which is served by ofxHttp. In those pages (a kind of presentation), I use ofxNI2 and ofxSyphon to control by gestures and export to MadMapper. The problem is that with ofxAwesomium the render flickers: some buttons blink when I hover over them, and the result looks broken.

If I open the link directly in any browser, everything runs fine. No flickering, no bugs, no blinking. So I think the problem could be ofxAwesomium, which performs badly at low frame rates.

Has anyone run into this situation? Can I do this with some Processing library? Can Quartz Composer, Isadora or Vue handle this? Thanks in advance.
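For the Syphon leg of the pipeline, Processing has a Syphon library (codeanticode) that publishes a sketch's canvas to clients like MadMapper. A minimal sketch, assuming that library is installed from the Contribution Manager (it does not solve the HTML-rendering part):

    import codeanticode.syphon.*;

    SyphonServer server;

    void setup() {
      // Syphon needs an OpenGL renderer
      size(800, 600, P3D);
      server = new SyphonServer(this, "Processing Syphon");
    }

    void draw() {
      background(0);
      // stand-in for the rendered HTML page
      ellipse(mouseX, mouseY, 60, 60);
      // publish the current frame; MadMapper sees it as a Syphon source
      server.sendScreen();
    }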

Silhouettes from kinect. SimpleOpenNI doesn't work on Processing 3?

kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);


I am facing a problem trying to get skeleton tracking working. I get an error message saying it cannot find anything named "SimpleOpenNI.SKEL_PROFILE_ALL", etc. I have imported SimpleOpenNI, but it didn't work.

My code is shown below; please give me a solution to my problem.

    import SimpleOpenNI.*;

    SimpleOpenNI kinect;

    public void setup() {
      kinect = new SimpleOpenNI(this);
      kinect.setMirror(true);
      kinect.enableDepth();
      //--kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
      size(kinect.depthWidth(), kinect.depthHeight());
    }

    public void draw() {
      kinect.update();
      image(kinect.depthImage(), 0, 0);
      if (kinect.isTrackingSkeleton(1)) {
        drawSkeleton(1);
      }
    }

    void drawSkeleton(int userId) {
      pushStyle();
      stroke(255, 0, 0);
      strokeWeight(3);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
      kinect.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
      popStyle();
    }

    public void onNewUser(int userId) {
      println("onNewUser - userId: " + userId);
      if (kinect.isTrackingSkeleton(1)) return;
      println("  start pose detection");
      kinect.startPoseDetection("Psi", userId);
    }

    public void onLostUser(int userId) {
      println("onLostUser - userId: " + userId);
    }

    public void onStartPose(String pose, int userId) {
      println("onStartPose - userId: " + userId + ", pose: " + pose);
      println("  stop pose detection");
      //--kinect.stopPoseDetection(userId);
      //--kinect.requestCalibrationSkeleton(userId, true);
    }

    public void onEndPose(String pose, int userId) {
      println("onEndPose - userId: " + userId + ", pose: " + pose);
    }

    public void onStartCalibration(int userId) {
      println("onStartCalibration - userId: " + userId);
    }

    public void onEndCalibration(int userId, boolean successfull) {
      println("onEndCalibration - userId: " + userId + ", successfull: " + successfull);
      if (successfull) {
        println("  User calibrated !!!");
        kinect.startTrackingSkeleton(userId);
      } else {
        println("  Failed to calibrate user !!!");
        println("  Start pose detection");
        //--kinect.startPoseDetection("Psi", userId);
      }
    }
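The missing constant itself is the likely culprit: SimpleOpenNI 1.96 dropped SKEL_PROFILE_ALL and pose-based calibration, so code written against 0.27-era examples no longer compiles. A sketch of the 1.96-style equivalent, under the assumption that you are on 1.96 (note enableUser() takes no arguments and the callbacks receive the context):

    import SimpleOpenNI.*;

    SimpleOpenNI kinect;

    void setup() {
      kinect = new SimpleOpenNI(this);
      kinect.setMirror(true);
      kinect.enableDepth();
      kinect.enableUser();  // no SKEL_PROFILE_ALL flag in 1.96
      size(640, 480);
    }

    void draw() {
      kinect.update();
      image(kinect.depthImage(), 0, 0);
      int[] users = kinect.getUsers();
      for (int i = 0; i < users.length; i++) {
        if (kinect.isTrackingSkeleton(users[i])) {
          drawSkeleton(users[i]);  // reuse drawSkeleton() from the code above
        }
      }
    }

    // 1.96 callbacks take the context; calibration starts without the Psi pose
    void onNewUser(SimpleOpenNI curContext, int userId) {
      println("onNewUser - userId: " + userId);
      curContext.startTrackingSkeleton(userId);
    }

    void onLostUser(SimpleOpenNI curContext, int userId) {
      println("onLostUser - userId: " + userId);
    }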

SimpleOpenNI library not working


Hi,

I recently got hold of an old Xbox 360 Kinect that a friend of mine wasn't using anymore, so I started to wonder what interesting things I could do with it, and that's how I heard about Kinect + Processing + Arduino. The thing is, I can't get the SimpleOpenNI library to work on my computer.

I am running Processing 2.2.1 + OpenNI 2.2 + SimpleOpenNI 1.96 + Kinect SDK 1.8 on a Windows 8.1 64-bit PC.

I installed the library just like the Google Code page says, and it installs correctly (or so it looks), but whenever I try to run the example code from the Google Code page I get the following error message:

[screenshot of the error message]

I tried different versions of Processing, SimpleOpenNI, and OpenNI, but I ALWAYS get the same error message.

Has anyone here had such a problem? I would greatly appreciate any input on the subject.

Thanks in advance, Cesar

Multiple users in kinect + processing


Hello! I am trying to build a Processing + Kinect sketch driven by the gestures/positions of multiple users. So far I haven't gotten any further than

kinect.getNumberOfUsers();

I couldn't find any other documentation on this, either on GitHub or the Processing forum.

Does anyone have an idea of what I could use/read/try?

thank you for any help!
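No answer in the thread, but the usual SimpleOpenNI pattern is to iterate over getUsers() and query each user ID separately, for example with getCoM() for a per-user center of mass. A sketch, assuming SimpleOpenNI 1.96:

    import SimpleOpenNI.*;

    SimpleOpenNI context;

    void setup() {
      size(640, 480);
      context = new SimpleOpenNI(this);
      context.enableDepth();
      context.enableUser();
    }

    void draw() {
      context.update();
      image(context.depthImage(), 0, 0);

      // getUsers() returns one ID per currently tracked user
      int[] userList = context.getUsers();
      for (int i = 0; i < userList.length; i++) {
        PVector com = new PVector();
        // center of mass of this user, in real-world coordinates
        if (context.getCoM(userList[i], com)) {
          PVector com2d = new PVector();
          context.convertRealWorldToProjective(com, com2d);
          fill(255, 0, 0);
          ellipse(com2d.x, com2d.y, 10, 10);
          text("user " + userList[i], com2d.x + 8, com2d.y);
        }
      }
    }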


How to calculate distance between usermap and an object on Kinect2 ?


Hello there, I wrote a simple program six months ago with the Kinect 1: compute the "usermap" > draw particles in the scene > if the distance between the silhouette [userMapX & userMapY] and a particle is less than 10 px, the particle turns green, otherwise red. Now I'm trying to achieve the same thing with the Kinect 2 and the KinectPV2 library on Processing 3, but I feel a little stuck. I wrote a test sketch that is supposed to change the color of a square when the distance between the user map and the square is less than 20 (see variable f). If you test it, you will see that the square changes color as you move in front of the Kinect, but it doesn't match the tracked body. Any advice or help with this?

Charles -


    import KinectPV2.*;

    KinectPV2 kinect;
    int f = 20;
    float dist;
    color c;
    boolean detect;

    void setup() {
      size(512, 424);
      frameRate(30);
      kinect = new KinectPV2(this);
      kinect.enableBodyTrackImg(true);
      kinect.enableDepthImg(false);
      kinect.init();
    }

    void draw() {
      background(255);
      image(kinect.getBodyTrackImage(), 0, 0);
      // depth is disabled in setup(), so don't draw getDepthImage() over it
      int [] rawData = kinect.getRawBodyTrack();

      // reset each frame, otherwise the last pixel tested decides the color
      detect = false;

      // test every other pixel
      for (int i = 0; i < rawData.length; i += 2) {

        // if this pixel is part of the user
        if (rawData[i] != 255) {
          // map the flat index back to (x, y): divide by width, not height
          float x = i % width;
          float y = i / width;

          // draw the body with blue points
          stroke(0, 0, 255);
          point(x, y);

          // distance between this user pixel (x, y) and the square at (f, f)
          dist = dist(x, y, f, f);
          // collision found: the square turns green
          if (dist <= f) {
            detect = true;
          }
        }
      }

      c = detect ? color(0, 255, 0) : color(255, 0, 0);

      fill(c);
      rect(f, f, 50, 50);
    }

projection mapping with AR tracking - almost there


My objective is to use AR tracking to get the 3D world position of a marker. Then, using a projector, I want to project a cross at the center of this marker while I move it.

My code is "almost" working. The projection follows the object on the X axis, but it is still not over the marker; the Y axis seems a little off.

The code is fully commented. Can someone please point me in the right direction?

/**
 * Projection Mapping Matrix Correspondence Study
 *
 * Turn the projector to a position where the camera can capture the whole frame
 * Put the marker in front of the camera so it can be detected and beamed by the projector
 * On the projection window, click at the center of the marker as you move it, at least 6 times.
 *
 * by Anderson Sudario, 2015
 *
 */

import jp.nyatla.nyar4psg.*;
import processing.video.*;
import gab.opencv.*;

import org.opencv.core.Mat;
import org.opencv.core.CvType;
import org.opencv.core.Core;

import java.awt.Frame;
import java.awt.BorderLayout;

int maxCPoints = 10; //min required is 6;

Mat A, B, P;
PVector [] p3d = new PVector[maxCPoints]; //stores 3D points in space on camera coord sys.
PVector [] p2dp = new PVector[maxCPoints];//stores projected 2D points correspondent to p3d position

ControlFrame projector;//new frame window for projection

Capture camera;
MultiMarker nya;
OpenCV opencv;
int markerSize = 70;
float tx, ty, tz, rx, ry, rz; //position/orientation of AR marker

boolean calibrationMode = true;
boolean allCPselected = false; //all 6 correspondent points selected
int currentCPoint = 0; //current correspondent point in calibration mode


//projection window setting
ControlFrame addControlFrame() {
  Frame f = new Frame("projectorWindow");
  ControlFrame p = new ControlFrame(this, 800, 600); //projector resolution
  f.add(p);
  p.init();
  f.setUndecorated(true); //hides the title bar. Can I hide/show it dynamically?
  f.setSize(p.w, p.h);
  f.setLocation(1680, 0); //1680 is the main window resolution
  f.setResizable(false);
  f.setVisible(true);
  return p;
}


void setup() {

  //control window
  size(640, 480, P3D);

  opencv = new OpenCV(this, 800, 600);
  camera = new Capture(this, width, height);
  camera.start();

  nya=new MultiMarker(this, width, height, "camera_para.dat", NyAR4PsgConfig.CONFIG_PSG);
  nya.setLostDelay(1);
  nya.addARMarker("patt.kanji", markerSize);

  //dummy values to initiate variables
  for (int i = 0; i < maxCPoints; i++) {
    p2dp[i] = new PVector(0, 0);
    p3d[i] = new PVector(0, 0, 0);
  }

  //projector window
  projector = addControlFrame();
}



public void draw() {  

  if (!camera.available()) {
    fill(0);
    rect(0, 0, width, 25);
    fill(255);
    textSize(12);
    textAlign(CENTER);
    text("NO CAMERA AVAILABLE", 320, 17);
    return;
  }

  //detect marker and set background from camera input
  nya.detect(camera);
  background(camera);

  if ((!nya.isExistMarker(0))) {
    fill(0);
    rect(0, 0, width, 25);
    fill(255);
    textSize(12);
    textAlign(CENTER);
    text("NO MARKER AVAILABLE", 320, 17);
    return;
  }

  getInfo();
  showInfo();
}



void getInfo() {
  //get marker 3D information
  float ax = nya.getMarkerMatrix(0).m00;
  float ay = nya.getMarkerMatrix(0).m10;
  float az = nya.getMarkerMatrix(0).m20;
  float bz = nya.getMarkerMatrix(0).m21;
  float cz = nya.getMarkerMatrix(0).m22;
  tx = nya.getMarkerMatrix(0).m03;
  ty = nya.getMarkerMatrix(0).m13;
  tz = nya.getMarkerMatrix(0).m23;  
  rx = degrees(atan2(bz, cz));
  ry = degrees(atan2(-az, sqrt( pow(bz, 2)+pow(cz, 2) ) ) );
  rz = degrees(atan2(ay, ax));

  if (calibrationMode) {
    fill(0);
    rect(0, 0, width, 25);
    fill(255);
    textSize(12);
    textAlign(CENTER);
    text("CALIBRATION MODE", 320, 17);
  }
}

void showInfo() {
  noStroke();
  fill(0, 0, 0, 100);
  rect(0, height-80, 250, height);
  fill(255);
  textAlign(LEFT);
  textSize(12);
  text("press c for calibration mode", 10, height - 50);
  text(String.format("rx = %.2f, ry = %.2f, rz = %.2f", rx, ry, rz), 10, height - 35);
  text(String.format("tx = %.1f, ty = %.1f, tz = %.1f", tx, ty, tz), 10, height - 20);
  stroke(0, 0, 255);
  line(0, 0, width, height);
  line(0, height, width, 0);
  noStroke();
  text("FPS: " + (int)frameRate, 10, height - 5);

  scale( width/640, height/480);

  //vertexes info
  textSize(10);
  for (int i=0; i < 1; i++ ) { 
    PVector[] pos2d = nya.getMarkerVertex2D(i);
    for (int j=0; j < pos2d.length; j++ ) {
      String s = "(" + int(pos2d[j].x) + "," + int(pos2d[j].y) + ")";
      fill(255);
      rect(pos2d[j].x, pos2d[j].y - textAscent()/2, textWidth(s) + 3, textAscent() + textDescent());
      fill(0);
      text(s, pos2d[j].x + 2, pos2d[j].y + 2);
      fill(255, 0, 0);
      ellipse(pos2d[j].x, pos2d[j].y, 5, 5);
    }
  }
}

void resetCalibration() {
  allCPselected = false;
  calibrationMode = true;
  currentCPoint = 0;
}


void keyPressed() {
  if (key == 'c') {
    resetCalibration();
  }
}


void calculateMatrix() {
  //setting matrices A and B
  double [] tmpA = new double[11 * maxCPoints * 2];
  int j = 0;
  for (int i = 0; i < maxCPoints; i++) { 
    float X, Y, Z, x, y;
    //println(i);
    X = p3d[i].x; 
    Y = p3d[i].y;
    Z = p3d[i].z;
    x = p2dp[i].x;
    y = p2dp[i].y;

    double[] Cx = {
      X, Y, Z, 1, 0, 0, 0, 0, -X*x, -Y*x, -Z*x
    };
    for (int k = 0; k < 11; k++) {
      tmpA[j] = Cx[k];
      j++;
    }
    double[] Cy = { 
      0, 0, 0, 0, X, Y, Z, 1, -X*y, -Y*y, -Z*y
    };
    for (int k = 0; k < 11; k++) {
      tmpA[j] = Cy[k];
      j++;
    }
  }
  int row = 0, col = 0;
  Mat a = new Mat(maxCPoints*2, 11, CvType.CV_64F);
  a.put( row, col, tmpA );
  j=0;


  double [] tmpB = new double[maxCPoints*2];
  for (int i = 0; i < maxCPoints; i++) {
    tmpB[j] =  p2dp[i].x;
    j++;
    tmpB[j] =  p2dp[i].y; // was p2dp[i].x twice, which would skew the Y axis
    j++;
  }
  Mat b = new Mat(maxCPoints*2, 1, CvType.CV_64F);
  b.put( 0, 0, tmpB );

  //creating empty to hold next operations
  A = new Mat();
  B = new Mat();
  P = new Mat();


  //below solves P in the normal equations: P = ( At * A ).inv() * ( At * B )
  Core.gemm(a.t(), a, 1, new Mat(), 0, A); //<- a.transp * a ; 1 = alpha, null, flag off, matrix to hold result
  Core.gemm(a.t(), b, 1, new Mat(), 0, B);
  Core.gemm(A.inv(1), B, 1, new Mat(), 0, P);

  //append the fixed 12th element (= 1), then reshape into the 3x4 projection matrix
  Mat z = new Mat(1, 1, CvType.CV_64F);
  z.put(0, 0, 1);
  P.push_back(z);
  P = P.reshape(1, 3);

  calibrationMode = false;
}



//-------------------------------------------------------------------------------



public class ControlFrame extends PApplet {
  int w, h;
  float fx, fy;
  Object parent;

  public void setup() {
    size(w, h, P3D);
    //frameRate(25);
  }

  public void draw() {
    background(30);

    if (calibrationMode) {
      drawCalibrationProjectionLines();
    } else {
      if (nya.isExistMarker(0)) {
        drawProjectionMappingCross();
      }
    }
  }


  void drawCalibrationProjectionLines() {
    stroke( (nya.isExistMarker(0))?#666666:#FF0000 );
    strokeWeight(3);
    line(0, mouseY, w, mouseY);
    line(mouseX, 0, mouseX, h);
    fill(50);
    rect(10, 10, w-20, h-20);
    noStroke();

    fill( (nya.isExistMarker(0))?#666666:#FF0000 );
    textSize(10);
    textAlign(CENTER, CENTER);
    text( currentCPoint + " ("+ mouseX + ","+ (mouseY)+")", (mouseX < w/2)? mouseX + 30 : mouseX - 30, (mouseY > h/2)? mouseY - 10 : mouseY +10);

    for ( int i = 0; i < currentCPoint; i++ ) {
      textAlign(LEFT, TOP);
      text( i +": " + p2dp[i] + " : " + p3d[i], 20, (11 * i) + 35);
    }
  }

  void drawProjectionMappingCross() {
    double [] tmp3D = new double[] { 
      tx, ty, tz, 1
    };
    int row = 0, col = 0;
    Mat new3D = new Mat(1, 4, CvType.CV_64F);
    new3D.put( row, col, tmp3D );

    Mat XY = new Mat();

    Core.gemm(P, new3D.t(), 1, new Mat(), 0, XY);

    float px = (float) XY.get(0, 0)[0];
    float py = (float) XY.get(1, 0)[0];

    stroke(0, 70, 0);
    line(px, 0, px, h);
    line(0, py, w, py);

  }

  void mousePressed( ) {
    if (!allCPselected && nya.isExistMarker(0) ) {
      p2dp[currentCPoint] = new PVector(mouseX, mouseY);
      p3d[currentCPoint] = new PVector(tx, ty, tz);

      currentCPoint++;
      if (currentCPoint>=p2dp.length)allCPselected = true;
    }
  }

  void mouseReleased() {
    if ( allCPselected ) {
      print(currentCPoint);
      calculateMatrix();
    }
  }


  void keyPressed() {
    if (key == 'c') {
      resetCalibration();
    }
  }



  private ControlFrame() {
  }

  public ControlFrame(Object theParent, int theWidth, int theHeight) {
    parent = theParent;
    w = theWidth;
    h = theHeight;
  }
}

Control ON/OFF led with kinect?


Hi there, can anyone help me use the Kinect to control a quadcopter with Processing?

Kinect, Windows


I was reading shiffman.net/p5/kinect/ and was about to purchase a Kinect when I realized that, from the tutorial, it seems the Kinect won't work with Windows, or at least not with Processing + Windows. Is this true? Does it work with Linux?

[SOLVED] Attractor (x, y) controlled by hand position. SimpleOpenNI, Kinect.


Hi, I am looking to use the (x, y) position of my hand to control the (x, y) position of an attractor in my sketch, so that wherever the user moves their hand, the on-screen particles are repelled or attracted.

I have not been coding for very long and am unsure how to do this. I thought I had it a couple of times, but nothing seems to be working.

For a better idea of how I want the code to work, change "myAttractor.x = mapHandVec.x;" to "myAttractor.x = mouseX;", and the same with "myAttractor.y = mouseY;".

Both libraries used are available for download from within the processing library menu.

Any help/replies would be super appreciated!

Thanks in advance,

Ross

Windows 8 Processing 2.2.1


code page 1:

class Attractor {
  // position
  float x=0, y=0;

  // radius of impact
  float radius = 200;
  // strength: positive for attraction, negative for repulsion
  float strength = 1; 
  // parameter that influences the form of the function
  float ramp = 0.5;    //// 0.01 - 0.99


  Attractor(float theX, float theY) {
    x = theX;
    y = theY;
  }


  void attract(Node theNode) {
    // calculate distance
    float dx = x - theNode.x;
    float dy = y - theNode.y;
    float d = mag(dx, dy);

    if (d > 0 && d < radius) {
      // calculate force
      float s = pow(d / radius, 1 / ramp);
      float f = s * 9 * strength * (1 / (s + 1) + ((s - 3) / 4)) / d;

      // apply force to node velocity
      theNode.velocity.x += dx * f;
      theNode.velocity.y += dy * f;
    }
  }

}

Code Page 2:

import generativedesign.*;
import SimpleOpenNI.*;

SimpleOpenNI context;

PVector handVec = new PVector();
PVector mapHandVec = new PVector();

// initial parameters
int xCount = 70;
int yCount = 70;
float gridSize = 600;

// nodes array
Node[] myNodes = new Node[xCount*yCount];

// attractor
Attractor myAttractor;


// image output
boolean saveOneFrame = false;
boolean saveToPrint = false;

void setup() { 

  context = new SimpleOpenNI(this);
  context.setMirror(true);
  context.enableDepth();
  context.enableHand();

  context.startGesture(SimpleOpenNI.GESTURE_WAVE);

  size(640,640);

  // setup drawing parameters
  colorMode(RGB, 255, 255, 255, 100);
  smooth();
  noStroke();
  fill(0);

  cursor(CROSS);

  // setup node grid
  initGrid();

  // setup attractor
  myAttractor = new Attractor(0, 0);
  myAttractor.strength = -3;
  myAttractor.ramp = 2;
}

//end void setup

void draw() {

  context.update();
  context.convertRealWorldToProjective(handVec,mapHandVec);

 //trying to map values to attractor (very unsuccessfully)

  myAttractor.x = mapHandVec.x;
  myAttractor.y = mapHandVec.y;

  for (int i = 0; i < myNodes.length; i++) {
      myAttractor.attract(myNodes[i]);

    myNodes[i].update();

    // draw nodes

    if (saveToPrint) {
      ellipse(myNodes[i].x, myNodes[i].y, 1, 1);
      if (i%1000 == 0) {
        println("saving to pdf - step " + int(i/1000 + 1) + "/" + int(myNodes.length / 1000));
      }
    }
    else {
      rect(myNodes[i].x, myNodes[i].y, 1, 1);
    }
  }
}

  //end void draw

void initGrid() {
  int i = 0;
  for (int y = 0; y < yCount; y++) {
    for (int x = 0; x < xCount; x++) {
      float xPos = x*(gridSize/(xCount-1))+(width-gridSize)/2;
      float yPos = y*(gridSize/(yCount-1))+(height-gridSize)/2;
      myNodes[i] = new Node(xPos, yPos);
      myNodes[i].setBoundary(0, 0, width, height);
      myNodes[i].setDamping(0.8);  //// 0.0 - 1.0
      i++;
    }
  }
}

//end void initGrid


void keyPressed() {
  if (key=='r' || key=='R') {
    initGrid();
    background(230);
  }
}

void onCreateHands(int handId, PVector pos, float time)
{
  println("onCreateHands - handId: " + handId + ", pos: " + pos + ", time:" + time);
  handVec = pos;
}

void onUpdateHands(int handId, PVector pos, float time)
{
  println("onUpdateHandsCb - handId: " + handId + ", pos: " + pos + ", time:" + time);
  handVec = pos;
}
void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition)
{
  println("onRecognizeGesture - strGesture: " + strGesture + ", idPosition: " + idPosition + ", endPosition:" + endPosition);

  context.endGesture(SimpleOpenNI.GESTURE_WAVE);
  context.startTrackingHand(endPosition);
}

How to change the range of PWM value


Hello there! I'm doing a project using Kinect + Processing + Arduino. FYI, I am very new to this software. I have code from the internet that controls the brightness of an LED with hand movement. The problem is that I don't understand where to initialize the first value in the frame that Processing constructs.

    import processing.serial.*;
    import java.util.Map;
    import java.util.Iterator;
    import SimpleOpenNI.*;

    SimpleOpenNI context;
    Serial myPort;

    int handVecListSize = 20;
    Map<Integer, ArrayList> handPathList = new HashMap<Integer, ArrayList>();
    color[] userClr = new color[] {
      color(255, 0, 0),
      color(0, 255, 0),
      color(0, 0, 255),
      color(255, 255, 0),
      color(255, 0, 255),
      color(0, 255, 255)
    };

    void setup() {
      // frameRate(200);
      size(640, 480);
      context = new SimpleOpenNI(this);

      if (context.isInit() == false) {
        println("Can't init SimpleOpenNI, maybe the camera is not connected!");
        exit();
        return;
      }

      // enable depthMap generation
      context.enableDepth();

      // mirror the image
      context.setMirror(true);

      // enable hands + gesture generation
      //context.enableGesture();
      context.enableHand();
      context.startGesture(SimpleOpenNI.GESTURE_WAVE);

      String portName = Serial.list()[0]; // this gets the first serial port on your computer
      myPort = new Serial(this, portName, 9600);

      // set how smooth the hand capturing should be
      //context.setSmoothingHands(.5);
    }

    void draw() {
      // update the cam
      context.update();

      image(context.depthImage(), 0, 0);

      // draw the tracked hands
      if (handPathList.size() > 0) {
        Iterator itr = handPathList.entrySet().iterator();
        while (itr.hasNext()) {
          Map.Entry mapEntry = (Map.Entry)itr.next();
          int handId = (Integer)mapEntry.getKey();
          ArrayList vecList = (ArrayList)mapEntry.getValue();
          PVector p;
          PVector p2d = new PVector();

          stroke(userClr[(handId - 1) % userClr.length]);
          noFill();
          strokeWeight(1);
          Iterator itrVec = vecList.iterator();
          beginShape();
          while (itrVec.hasNext()) {
            p = (PVector)itrVec.next();
            context.convertRealWorldToProjective(p, p2d);
            vertex(p2d.x, p2d.y);
          }
          endShape();

          stroke(userClr[(handId - 1) % userClr.length]);
          strokeWeight(4);
          p = (PVector)vecList.get(0);
          context.convertRealWorldToProjective(p, p2d);
          point(p2d.x, p2d.y);

          myPort.write('S');
          // send the hand's x-position scaled to 0..255
          myPort.write(int(255 * p2d.x / width));
          // send the hand's y-position scaled to 0..255
          myPort.write(int(255 * p2d.y / height));
        }
      }
    }

    // -----------------------------------------------------------------
    // hand events

    void onNewHand(SimpleOpenNI curContext, int handId, PVector pos) {
      println("onNewHand - handId: " + handId + ", pos: " + pos);
      ArrayList vecList = new ArrayList();
      vecList.add(pos);
      handPathList.put(handId, vecList);
    }

    void onTrackedHand(SimpleOpenNI curContext, int handId, PVector pos) {
      //println("onTrackedHand - handId: " + handId + ", pos: " + pos);
      ArrayList vecList = handPathList.get(handId);
      if (vecList != null) {
        vecList.add(0, pos);
        // remove the last point
        if (vecList.size() >= handVecListSize)
          vecList.remove(vecList.size() - 1);
      }
    }

    void onLostHand(SimpleOpenNI curContext, int handId) {
      println("onLostHand - handId: " + handId);
      handPathList.remove(handId);
    }

    // -----------------------------------------------------------------
    // gesture events

    void onCompletedGesture(SimpleOpenNI curContext, int gestureType, PVector pos) {
      println("onCompletedGesture - gestureType: " + gestureType + ", pos: " + pos);
      int handId = context.startTrackingHand(pos);
      println("hand tracked: " + handId);
    }

    void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition) {
      // SimpleOpenNI.GESTURE_HAND_RAISE //endPosition
      context.endGesture(SimpleOpenNI.GESTURE_HAND_RAISE);
      context.startTrackingHand(endPosition);
      // context.startGesture(SimpleOpenNI.GESTURE_WAVE);
    }

    // -----------------------------------------------------------------
    // keyboard events

    void keyPressed() {
      switch(key) {
      case ' ':
        context.setMirror(!context.mirror());
        break;
      case '1':
        context.setMirror(true);
        break;
      case '2':
        context.setMirror(false);
        break;
      }
    }
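On the actual question: the PWM range is set where the sketch writes to the serial port; int(255 * p2d.x / width) squeezes the hand position into 0..255. A hedged alternative using Processing's built-in map() and constrain(), where the minPWM/maxPWM bounds are illustrative values rather than anything from the original post:

    // maps a hand coordinate in 0..span into a custom PWM range;
    // minPWM/maxPWM are illustrative: pick whatever your Arduino sketch expects
    int toPWM(float coord, float span, int minPWM, int maxPWM) {
      return int(constrain(map(coord, 0, span, minPWM, maxPWM), minPWM, maxPWM));
    }

Inside draw(), the two writes then become myPort.write(toPWM(p2d.x, width, 50, 200)); and myPort.write(toPWM(p2d.y, height, 50, 200));.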

Kinect v.2 working on mac ?


Multiple kinects in simpleopenni 0.27 and simpleopenni 1.96


Hi.

I have a problem with simpleopenni 0.27: it doesn't start multicam.pde; it reports that it can't open the depth map and that maybe the camera is not connected. But when I use simpleopenni 1.96, multicam.pde runs without problems, and I was wondering why. I run them on a laptop, and after checking both versions' code, the only thing that comes to mind is that 1.96 is the newer version and comes with new things, like multithreading (I assume this allows both Kinects to run on the same USB hub, but I'm not sure). Has anybody had the same issue with simpleopenni 0.27? How did you fix it?

Thanks in advance.

Greetings.
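For reference, 1.96 ships a Multi_Camera example built around a device count and an index-taking constructor. Roughly this shape, from memory, so verify the method names against the bundled example:

    import SimpleOpenNI.*;

    SimpleOpenNI cam1, cam2;

    void setup() {
      size(640 * 2, 480);
      if (SimpleOpenNI.deviceCount(this) < 2) {
        println("need two cameras");
        exit();
        return;
      }
      cam1 = new SimpleOpenNI(0, this);  // first Kinect
      cam2 = new SimpleOpenNI(1, this);  // second Kinect
      cam1.enableDepth();
      cam2.enableDepth();
    }

    void draw() {
      SimpleOpenNI.updateAll();  // update every open context in one call
      image(cam1.depthImage(), 0, 0);
      image(cam2.depthImage(), 640, 0);
    }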

How to export Video in good quality in Point Cloud mode?


Hi, I am new to Processing and Kinect, but I have played a little with the Point Cloud sketch. I would like to export this point-cloud preview as video. I tried to find a tutorial on YouTube and Google, but I couldn't find a solution. What code should I write to export that preview? The original sketch is:

        /*
        Copyright (C) 2014  Thomas Sanchez Lengeling.
         KinectPV2, Kinect for Windows v2 library for processing
        */

        import java.nio.FloatBuffer;

        import KinectPV2.*;
        import javax.media.opengl.GL2;

        private KinectPV2 kinect;

        float a = 0;
        int zval = 50;
        float scaleVal = 260;

        //Distance Threshold
        float maxD = 4.0f; //meters
        float minD = 1.0f;

        public void setup() {
          size(1366, 768, P3D);

          kinect = new KinectPV2(this);
          kinect.enableDepthImg(true);
          kinect.enablePointCloud(true);
          kinect.activateRawDepth(true);

          kinect.setLowThresholdPC(minD);
          kinect.setHighThresholdPC(maxD);

          kinect.init();
        }

        public void draw() {
          background(0);

          //image(kinect.getDepthImage(), 0, 0, 320, 240);

          //Threshold of the point cloud.
          kinect.setLowThresholdPC(minD);
          kinect.setHighThresholdPC(maxD);

          FloatBuffer pointCloudBuffer = kinect.getPointCloudDepthPos();

          PJOGL pgl = (PJOGL)beginPGL();
          GL2 gl2 = pgl.gl.getGL2();

          gl2.glEnable( GL2.GL_BLEND );
          gl2.glEnable(GL2.GL_POINT_SMOOTH);      

          gl2.glEnableClientState(GL2.GL_VERTEX_ARRAY);
          gl2.glVertexPointer(3, GL2.GL_FLOAT, 0, pointCloudBuffer);

          gl2.glTranslatef(width/2, height/2, zval);
          gl2.glScalef(scaleVal, -1*scaleVal, scaleVal);
          gl2.glRotatef(a, 0.0f, 1.0f, 0.0f);

          gl2.glDrawArrays(GL2.GL_POINTS, 0, kinect.WIDTHDepth * kinect.HEIGHTDepth);
          gl2.glDisableClientState(GL2.GL_VERTEX_ARRAY);
          gl2.glDisable(GL2.GL_BLEND);
          endPGL();

          stroke(255, 0, 0);
          text(frameRate, 50, height- 50);
        }

        public void mousePressed() {

          println(frameRate);
         // saveFrame();
        }

        public void keyPressed() {
          if (key == 'a') {
            zval +=1;
            println(zval);
          }
          if (key == 's') {
            zval -= 1;
            println(zval);
          }

          if (key == 'z') {
            scaleVal += 0.1;
            println(scaleVal);
          }
          if (key == 'x') {
            scaleVal -= 0.1;
            println(scaleVal);
          }

          if (key == 'q') {
            a += 1;
            println(a);
          }
          if (key == 'w') {
            a -= 1;
            println(a);
          }

          if (key == '1') {
            minD += 0.01;
            println("Change min: "+minD);
          }

          if (key == '2') {
            minD -= 0.01;
            println("Change min: "+minD);
          }

          if (key == '3') {
            maxD += 0.01;
            println("Change max: "+maxD);
          }

          if (key == '4') {
            maxD -= 0.01;
            println("Change max: "+maxD);
          }

        }
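One simple route, not from the thread: call saveFrame() on every draw() and assemble the numbered images afterwards with the Movie Maker tool that ships with Processing (Tools > Movie Maker). A sketch of the pattern, to be merged into the draw() above:

        boolean recording = false;

        void draw() {
          // ... render the point cloud exactly as above ...

          if (recording) {
            // writes frames/000001.tif, frames/000002.tif, ... losslessly,
            // so expect the sketch to slow down while recording
            saveFrame("frames/######.tif");
          }
        }

        void keyPressed() {
          if (key == 'r') recording = !recording;  // toggle capture
        }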

Kinect v.2 on Mac - Device connection failure


Hello! I just got hold of a Kinect v.2 and I'm trying out Shiffman's article on the subject: http://shiffman.net/p5/kinect/

I've copy-pasted this code just to see if the connections are OK, but I am getting an error that I am unsure of. I am currently on a MacBook Pro 2011 that doesn't have USB 3, but I can still connect USB 3 devices to it. Could this be related to that?

    ... Init Kinect2
    [Freenect2Impl] enumerating devices...
    [Freenect2Impl] 7 usb devices connected
    [Freenect2Impl] found valid Kinect v2 @250:7 with serial 035701545147
    [Freenect2Impl] found 1 devices
    1 Device Connected!
    [OpenCLDepthPacketProcessor::listDevice] devices:
      0: Intel(R) Core(TM) i7-2820QM CPU @ 2.30GHz (CPU)[Intel]
      1: ATI Radeon HD 6750M (GPU)[AMD]
    [OpenCLDepthPacketProcessor::init] selected device: ATI Radeon HD 6750M (GPU)[AMD]
    Devce: 0
    [Freenect2Impl] enumerating devices...
    [Freenect2Impl] 7 usb devices connected
    [Freenect2Impl] found valid Kinect v2 @250:7 with serial 035701545147
    [Freenect2Impl] found 1 devices
    [Freenect2DeviceImpl] opening...
    [Freenect2DeviceImpl] closing...
    [Freenect2DeviceImpl] deallocating usb transfer pools...
    [Freenect2DeviceImpl] closing usb device...
    [UsbControl::claimInterfaces(IrInterfaceId)] failed! libusb error -3: LIBUSB_ERROR_ACCESS
    [Freenect2DeviceImpl] closed
    [Freenect2DeviceImpl] failed to open Kinect v2 @250:7!
    no device connected or failure opening the default one!

It seems that the Kinect is recognised OK, but I still get:

... [UsbControl::claimInterfaces(IrInterfaceId)] failed! libusb error -3: LIBUSB_ERROR_ACCESS ... no device connected or failure opening the default one!

I just need help narrowing down my search, as I don't have any other hardware to test it on. Is it just the USB 3 problem?

Thanks, Franck :)

Kinect for Windows V2 Library for Processing


Hey.

I just started developing a Kinect One library for Processing. This version uses the Kinect One SDK beta (K2W2), so it only works on Windows ): .

You can get the current version, which is still beta, here:

https://github.com/ThomasLengeling/KinectPV2

[screenshot]

I have only tested it on my machine, so please send me your comments and suggestions.

It currently supports only color, depth, and infrared capture. In the coming weeks I'll be adding features like skeleton tracking, point cloud, and user tracking. The K2W2 SDK is also still in beta, so I will be updating the library over the next couple of weeks.

Thomas

Need positional data of the edge or contour of a Kinect silhouette?


Hi,

I am working on a personal project where I want to grow grass and plants from the contour of the Kinect silhouette. I want a list of coordinates only along the edge, from which I can emit grass as particles. How can I do this?

I am running a Kinect 1 on Mac OS using SimpleOpenNI 1.96

Please help if you have any idea of how to achieve this result.

Thanks.
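Not answered in the thread, but one approach: take the per-pixel user map and keep only user pixels with at least one background 4-neighbour; those pixels are the contour, and their coordinates can seed the grass emitters. A sketch, assuming SimpleOpenNI 1.96, where userMap() returns an int[] of per-pixel user labels (0 = background):

    import SimpleOpenNI.*;

    SimpleOpenNI context;
    ArrayList<PVector> edge = new ArrayList<PVector>();

    void setup() {
      size(640, 480);
      context = new SimpleOpenNI(this);
      context.enableDepth();
      context.enableUser();
    }

    void draw() {
      context.update();
      background(0);

      int[] userMap = context.userMap();  // 0 = background, >0 = user label
      int w = context.depthWidth();
      int h = context.depthHeight();

      edge.clear();
      stroke(0, 255, 0);
      for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
          int i = y * w + x;
          // a user pixel is on the contour if any 4-neighbour is background
          if (userMap[i] > 0 &&
             (userMap[i - 1] == 0 || userMap[i + 1] == 0 ||
              userMap[i - w] == 0 || userMap[i + w] == 0)) {
            edge.add(new PVector(x, y));  // emit grass particles from these
            point(x, y);
          }
        }
      }
    }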
