Channel: Kinect - Processing 2.x and 3.x Forum

Track people standing within a certain distance from the kinect (kinectPV2)


I am trying to use skeleton tracking to only track the people closest to the Kinect, so that people in the background do not affect my sketch. Is there any way to set a filter so that only people within a certain distance from the Kinect are tracked? I am using the KinectPV2 library.

Thank you!
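As far as I know KinectPV2 has no built-in distance filter, but here is a minimal sketch of one approach, using the 3-D skeleton map where a joint's Z is roughly its distance from the sensor in metres (method names as in the library's Skeleton3d example; verify them against your KinectPV2 version):

import KinectPV2.*;

KinectPV2 kinect;
float maxDistance = 2.0; // metres; tune to your space

void setup() {
  size(512, 424, P3D);
  kinect = new KinectPV2(this);
  kinect.enableSkeleton3dMap(true);
  kinect.init();
}

void draw() {
  background(0);
  ArrayList<KSkeleton> skeletons = kinect.getSkeleton3d();
  for (KSkeleton skeleton : skeletons) {
    if (!skeleton.isTracked()) continue;
    KJoint[] joints = skeleton.getJoints();
    // use the spine-mid joint's depth as "how far away this person is"
    float z = joints[KinectPV2.JointType_SpineMid].getZ();
    if (z > 0 && z <= maxDistance) {
      // this person is close enough: process their joints here
    }
  }
}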


Reduce PointCloud Resolution in KinectPV2


Does anyone have a solution for reducing the number of points read and rendered in Lengeling's fantastic KinectPV2 library/sketches? I'd love to have 10% of the native resolution/points. I'm using a Kinect v2 in a sketch based on his PointCloudOGL example. Ideally without having to convert the data to x,y,z points, for efficiency, but whatever it takes. Losing my mind but not the points. Thanks!
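There is no resolution setting I know of in KinectPV2, but since getPointCloudDepthPos() hands back a flat FloatBuffer of x,y,z triplets for the 512x424 depth grid (that layout is my assumption from the library's examples; verify it), you can stride over it and draw every Nth point. A minimal sketch, to be called from a P3D sketch with the buffer from kinect.getPointCloudDepthPos():

import java.nio.FloatBuffer;

int skip = 10; // keep roughly 1 point in 10

void drawSparseCloud(FloatBuffer buf) {
  int numPoints = 512 * 424; // Kinect v2 depth resolution
  stroke(255);
  beginShape(POINTS);
  for (int i = 0; i < numPoints; i += skip) {
    float x = buf.get(i * 3);
    float y = buf.get(i * 3 + 1);
    float z = buf.get(i * 3 + 2);
    vertex(x * 300, y * 300, z * 300); // arbitrary scale into screen units
  }
  endShape();
}

Note this abandons the VBO path that PointCloudOGL uses, trading GPU efficiency for fewer points; inside PointCloudOGL itself the equivalent change would be copying every Nth triplet into a smaller buffer before handing it to the vertex buffer.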

Track color within a certain distance (Mix color tracking and depth?)


Hi, I've been experimenting with Processing 3 for the past few weeks (following Daniel Shiffman's videos), and now I'm trying to do something I'm not even sure can be done easily.

I want to know if I can mix color tracking with a distance threshold. My plan is to have three objects of different colors, and people can use them to "paint" on a glass pane, with the result showing on a wall (I must absolutely use physical objects because it's an art project).

The problem is that people could wear clothes the same color as an object, so we want to impose a minimum distance on the color detection. I have already tried a minimal threshold distance with the Kinect, but can I combine the two? It's quite specific, I'll admit, but maybe someone has an idea? Or another solution?

Thanks!

(For clarification, the painting part is not done by me; I'll just send the position of the object to a server.)
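For what it's worth, here is a minimal sketch of the combination, using Shiffman's Open Kinect for Processing with a Kinect v1. The raw depth window and the colour tolerance are arbitrary values to tune, and note the depth and RGB cameras are not perfectly pixel-aligned, so treat the pairing as approximate:

import org.openkinect.processing.*;

Kinect kinect;
color target = color(255, 0, 0);     // the object colour to track
float colorTolerance = 50;           // max RGB distance to count as a match
int minDepth = 400, maxDepth = 900;  // raw depth window (device units)

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  kinect.initVideo();
}

void draw() {
  PImage video = kinect.getVideoImage();
  int[] depth = kinect.getRawDepth();
  video.loadPixels();
  image(video, 0, 0);

  float sumX = 0, sumY = 0, count = 0;
  for (int i = 0; i < depth.length; i++) {
    // first gate by distance...
    if (depth[i] < minDepth || depth[i] > maxDepth) continue;
    // ...then by colour similarity
    color c = video.pixels[i];
    float d = dist(red(c), green(c), blue(c), red(target), green(target), blue(target));
    if (d < colorTolerance) {
      sumX += i % 640;
      sumY += i / 640;
      count++;
    }
  }
  if (count > 0) {
    // centroid of the matching pixels: the position to send to the server
    ellipse(sumX / count, sumY / count, 20, 20);
  }
}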

How can I save the previous state of an array, to recall in the next loop?


I am trying to connect lines between two sets of x/y values: one current set and one previous set. These x/y values are determined by centre of mass using a Kinect library, so I can sometimes receive multiple x/y values at once (which is why I'm storing them in an array on each loop).

Currently I am getting an array which looks something like this:

storeX[]
[0]5672
[1]4352
[2]4262

which is what I want.

But what I now need to do is save this for one loop, so that in the next loop I have the new array plus the array from the previous loop. (Then I can draw a line between the x/y values for each user detected.)

I have tried to replicate the process I used to save the current x/y values (shown in the code below), but this gives me erratic results, and sometimes values in the array just reset to 0.0.

Is it possible to somehow copy the original array, but delay it for one loop?

//imports Kinect lib
import SimpleOpenNI.*;

//defines variable for kinect object
SimpleOpenNI kinect;

//declare variables for mapped x y values
float x, y, pX, pY;

//declare variable for number of people
float numPeople = 0;

//initialise other variables
int userId;
float inches;
int count = 0;

void setup() {

  //set the display size to full screen
  size(displayWidth, displayHeight);

  //declares new kinect object
  kinect = new SimpleOpenNI(this);

  //enable depth image
  kinect.enableDepth();

  //enable user detection
  kinect.enableUser();

 // frameRate(1);
 background(255);
}

void draw() {

  //updates depth image
  kinect.update();

  //access all users currently available to us
  IntVector userList = new IntVector();
  kinect.getUsers(userList);

  numPeople = userList.size();

  //CREATE ARRAYS TO STORE CURRENT X, Y AND DEPTH VALUES
  float[] storeX = new float[int(numPeople)];
  float[] storeY = new float[int(numPeople)];
  float[] storeDepth = new float[int(numPeople)];


  //CREATE ARRAYS TO STORE PREV X AND Y VALUES
  float[] pastX = new float[int(numPeople)];
  float[] pastY = new float[int(numPeople)];

  //for every user detected, do this
  for (int i = 0; i<userList.size (); i++) {

    userId = userList.get(i);

    //declare PVector position to store position
    PVector position = new PVector();

    //get the position
    kinect.getCoM(userId, position);
    kinect.convertRealWorldToProjective(position, position);



    //SET THE PREV X AND Y VAL TO THE CURRENT VAL OF X AND Y
    pX = x;
    pY = y;
    //LOAD THESE INTO ARRAY
    pastX[i] = pX;
    pastY[i] = pY;

    //map x and y coordinates
    x = map(position.x, 0, 640, 0, displayWidth);
    y = map(position.y, 0, 480, 0, displayHeight);

    //store current values in arrays
    storeX[i] = x;
    storeY[i] = y;
    storeDepth[i] = inches;  // note: 'inches' is never assigned anywhere in this sketch



   //for each stored pair of values, draw a line from the previous position to the current one
   for (int j = 0; j < storeX.length; j++) {

      line(pastX[j],pastY[j],storeX[j],storeY[j]);

    }

  }
}
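For reference, the usual fix is to declare the arrays once at sketch level and copy the current values into the "past" arrays at the very end of draw(), after all the drawing. A minimal sketch of the idea, reusing the SimpleOpenNI setup above (note user indices can shuffle as people enter and leave, so matching by userId would be more robust):

float[] storeX = new float[0];
float[] storeY = new float[0];
float[] pastX = new float[0];
float[] pastY = new float[0];

void draw() {
  kinect.update();

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  int n = (int) userList.size();

  storeX = new float[n];
  storeY = new float[n];

  for (int i = 0; i < n; i++) {
    PVector position = new PVector();
    kinect.getCoM(userList.get(i), position);
    kinect.convertRealWorldToProjective(position, position);
    storeX[i] = map(position.x, 0, 640, 0, displayWidth);
    storeY[i] = map(position.y, 0, 480, 0, displayHeight);

    // only draw when a previous value exists for this user index
    if (i < pastX.length) {
      line(pastX[i], pastY[i], storeX[i], storeY[i]);
    }
  }

  // keep a copy (clone, not a reference) for the next frame
  pastX = storeX.clone();
  pastY = storeY.clone();
}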

Face detection - OpenCV: If clauses to act when no face is detected - HELP PLEASE!


Hi there,

I'm working on face detection code that will put a square around your face when it's detected, but will display text or an image when no face is detected on screen. Ideally I'd like it to respond after a couple of seconds of no detection, but anything is better than nothing!

I currently have the face detection working. But when I try to add an if clause to act when no face is detected, nothing happens when I run the code. Any help is greatly appreciated.

Here is the current code:

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
Rectangle[] faces;

void setup() {
  size(640, 480, P2D);
  background(0, 0, 0);
  cam = new Capture(this, 640, 480, 30);
  cam.start();
  opencv = new OpenCV(this, cam.width, cam.height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  opencv.loadImage(cam);
  faces = opencv.detect();
  image(cam, 0, 0);
  // detect() returns an empty array rather than null when no face is found,
  // so test faces.length instead of comparing against null
  if (faces != null && faces.length > 0) {
    for (int i = 0; i < faces.length; i++) {
      noFill();
      stroke(255, 255, 0);
      strokeWeight(10);
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  } else {
    textAlign(CENTER);
    fill(255, 0, 0);
    textSize(56);
    text("UNDETECTED", 100, 100);
  }
}

void captureEvent(Capture cam) {
  cam.read();
}
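For the "respond after a couple of seconds of no detection" part, here is a minimal sketch of one way to do it, tracking the time of the last detection with millis(). drawOverlay is a hypothetical helper (call it at the end of draw(), after detect()) and the 2000 ms threshold is arbitrary:

int lastFaceTime = 0;      // millis() timestamp of the most recent detection
int noFaceDelay = 2000;    // how long to wait before reacting, in milliseconds

void drawOverlay() {
  if (faces != null && faces.length > 0) {
    lastFaceTime = millis();            // a face is visible: reset the timer
  } else if (millis() - lastFaceTime > noFaceDelay) {
    textAlign(CENTER);                  // no face for a while: show the message
    fill(255, 0, 0);
    textSize(56);
    text("UNDETECTED", width/2, height/2);
  }
}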

Anyone used the OpenCV library to set camera brightness /contrast?


I'm not talking about filtering the image. Supposedly OpenCV has commands to set brightness, contrast, and other hardware settings, if they are supported by the camera or grabber device in Device Manager. I want to use the OpenCV library to set these parameters from inside the program during the setup phase.
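For what it's worth, the contributed OpenCV for Processing library works on PImages and does not appear to expose capture-device properties, but the underlying OpenCV Java bindings do. A sketch of the idea, assuming OpenCV 3.x+ bindings are on the classpath (in 2.4.x the constants live in Highgui instead of Videoio); note you would then also have to grab frames from this VideoCapture rather than from processing.video:

import org.opencv.videoio.VideoCapture;
import org.opencv.videoio.Videoio;

VideoCapture grabber = new VideoCapture(0);      // open device 0
// property ranges are driver-dependent; many drivers expect 0.0-1.0
grabber.set(Videoio.CAP_PROP_BRIGHTNESS, 0.6);
grabber.set(Videoio.CAP_PROP_CONTRAST, 0.5);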

opencv library


There are different OpenCV libs besides the contributed one for Processing; I see cv2 and hypermedia in OpenCV examples. opencv.images() gives an error in Processing: "the function image() does not exist."

I am trying to get the mean of images after setting the brightness with OpenCV, but the mean is always the same. How do I read the resultant image from the OpenCV brightness call?

opencv.image.pixels[x] gives an error as well!

Do I need to import a different OpenCV library, cv2 or hypermedia? Is the contributed OpenCV lib the only one that works with Processing?

Thanks!

import gab.opencv.*;

int x, n;
float sum, sampix, mean = 0, oldmean = 100;

PImage img, resimg;
OpenCV opencv;

void setup() {
  size(1080, 720);
  sampix = 640*480;
  img = loadImage("refimg.png");
  opencv = new OpenCV(this, img);
}

void draw() {
  opencv.loadImage(img);
  opencv.brightness((int)map(mouseX, 0, width, -255, 255));

  // read the brightened image back out of OpenCV before measuring it;
  // resimg was never assigned in the original, which is why the mean never changed
  resimg = opencv.getSnapshot();

  imgmean();
  fill(250);
  rect(700, 0, 100, 100);
  fill(0);
  text((int)map(mouseX, 0, width, -255, 255), 710, 20);
  text("MEAN=" + mean, 710, 50);

  image(resimg, 0, 0);
}

/*********** IMGMEAN ***********/
void imgmean() {
  x = n = 0;
  sum = 0;
  oldmean = mean;
  resimg.loadPixels();
  for (x = 0; x < sampix; x++) {
    sum += green(resimg.pixels[x]);
    n++;
  }
  mean = sum/n;
}



Kinect v2 How to shorten the cable?


On the Kinect v2: it is great, and there are nice libraries for it.

But that cable is so bad... way too long, with too many proprietary connectors. For installation work or tight spaces it is really bad. Anyone who has used one must've come to a similar conclusion.

Has anyone seen, crafted, or heard of a good way to make it shorter or use less parts? Or even use more standard connectors?

Is there a developer edition that is easier to use?

It would be great to publish a way to do this since there doesn't seem to be one out there yet.

Export .obj from 3d slitscan mesh


Hey guys, hopefully someone here can help out. I've been playing around with this Kinect 3D slitscan sketch by Jagracar. I need someone who can help develop the sketch to allow exporting the generated meshes into a 3D format, which will ultimately allow 3D printing. I'm more than willing to pay someone for their time if they can help make this happen. I have spoken to the original author, who is unfortunately too busy, but he suggested I try here to see if someone is interested in helping. Please let me know, and thanks in advance!
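In case it helps whoever picks this up: OBJ is a plain-text format, so the core of the export is small. A minimal sketch (saveMeshAsOBJ and the vertex/face lists are hypothetical stand-ins for whatever the slitscan sketch builds its mesh from):

void saveMeshAsOBJ(ArrayList<PVector> vertices, ArrayList<int[]> faces, String filename) {
  PrintWriter out = createWriter(filename);
  // one "v x y z" line per vertex
  for (PVector v : vertices) {
    out.println("v " + v.x + " " + v.y + " " + v.z);
  }
  // one "f a b c" line per triangle; OBJ indices are 1-based
  for (int[] f : faces) {
    out.println("f " + (f[0] + 1) + " " + (f[1] + 1) + " " + (f[2] + 1));
  }
  out.flush();
  out.close();
}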

could not run the sketch (Target VM failed to initialize)


hi there, I'm running Windows 10 with an AMD Radeon graphics card and the latest version of Processing. When running my sketch I get the error message "Could not run the sketch (Target VM failed to initialize)" and also a Windows Firewall notification that it blocks some Java functions. Even if I allow access, I still get the error. This was a fully functioning sketch before. Any idea? Thanks

Newbie Question Kinect (1520) with mac osx YOSEMITE


Greetings, creative souls. I'd like to buy the Kinect v2 (1520) sensor and connect it to my MBP. I have read an interesting article on http://perfectminutegames.com and I am interested in giving it a go! Do I have to buy the adapter to make the connection, or is there a connection cable in the box? Are there other ways to connect the sensor to my Mac?

Thank you in advance, Alex Specs, lost-luggage.gr

How to avoid global variables (objects, actually) while importing libraries?


Rookie question here (so maybe the question is fundamentally wrong.)

If I'm using a library, and I only want a single object from it to be used in a class in my sketch, how should I organize my code?

Let's assume I'm using Kinect libraries, and I have a class that makes use of it. The common approaches are the following ones:

import org.openkinect.processing.*;
Kinect kinect;
Game game;

void setup() {
  kinect = new Kinect(this);
  game = new Game();
}


class Game {
  // code calling methods from the kinect object
}

or

import org.openkinect.processing.*;
Kinect kinect;
Game game;

void setup() {
  kinect = new Kinect(this);
  game = new Game(kinect);
}


class Game {
  Kinect kinect;
  Game (Kinect kinect) {
    this.kinect = kinect;
  }
  // code calling methods from the kinect object
}

Is it possible to initialize the object and make the Game class the only one that can see it? I've thought of:

import org.openkinect.processing.*;
Game game;

void setup() {
  game = new Game(this);
}


class Game {
  Kinect kinect;
  Game (PApplet parent) {
    kinect = new Kinect(parent);
  }
  // code calling methods from the kinect object
}

But it looks really weird to me.

How to improve OpenCV performance on ARM?


Hi guys, I am making a face-tracking Nerf blaster using OpenCV on the Raspberry Pi. I am using a Microsoft LifeCam webcam for capture input and the SoftwareServo class for blaster control. However, my code runs at 1-2 FPS on the Pi (Pi 3 Model B). I am currently using scale to improve performance, but the code still runs at 1 FPS. Additionally, the servos are extremely jittery. I am powering the servos from a 2A 5V regulator and the Pi from a 2A USB supply; the grounds are connected. Does anyone know how to improve performance? Maybe a different CV library? Thanks for the input! Code:

import processing.io.*;
import gab.opencv.*;
import processing.video.*;
import java.awt.*;

PImage img;
Rectangle[] faceRect;

Capture cam;
OpenCV opencv;
SoftwareServo panServo;
SoftwareServo trigServo;

int widthCapture = 320;
int heightCapture = 240;
int fpsCapture = 30;
int panpos = 90;
int firePos = 80;
int readyPos = 0;
long time;
int wait = 500;

int targetCenterX;
int targetCenterY;

int threshold = 20;
int thresholdLeft;
int thresholdRight;
int moveIncrement = 2;

int circleExpand = 20;
int circleWidth = 3;

boolean isFiring = false;
boolean isFound = false;
boolean manual = false;

void setup() {
  size(320, 240);
  frameRate(fpsCapture);
  background(0);
  panServo = new SoftwareServo(this);
  trigServo = new SoftwareServo(this);
  panServo.attach(17);
  trigServo.attach(4);

  cam = new Capture(this, widthCapture, heightCapture);
  cam.start();

  opencv = new OpenCV(this, widthCapture, heightCapture);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  if (millis() - time >= wait) {
    trigServo.write(readyPos);
    isFiring = false;
  }
  if (isFiring) {
    trigServo.write(firePos);
    tint(255, 0, 0);
  } else {
    trigServo.write(readyPos);
    noTint();
  }
  if (cam.available() == true) {
    cam.read();
    img = cam.get();

    opencv.loadImage(img);

    image(img, 0, 0);
    blend(img, 0, 0, widthCapture, heightCapture, 0, 0, widthCapture, heightCapture, HARD_LIGHT);
    faceRect = opencv.detect();
  }

  stroke(255, 255, 255);
  strokeWeight(1);
  thresholdLeft = (widthCapture/2) - threshold;
  thresholdRight = (widthCapture/2) + threshold;

  stroke(255, 255, 255, 128);
  strokeWeight(1);
  line(thresholdLeft, 0, thresholdLeft, heightCapture);   //left line
  line(thresholdRight, 0, thresholdRight, heightCapture); //right line

  if ((faceRect != null) && (faceRect.length != 0)) {
    isFound = true;
    //Get center point of identified target
    targetCenterX = faceRect[0].x + (faceRect[0].width/2);
    targetCenterY = faceRect[0].y + (faceRect[0].height/2);

    //Draw circle around face
    noFill();
    strokeWeight(circleWidth);
    stroke(255, 255, 255);
    ellipse(targetCenterX, targetCenterY, faceRect[0].width + circleExpand, faceRect[0].height + circleExpand);
    if (!manual) {
  //Handle rotation
  if (targetCenterX < thresholdLeft)
  {
    panpos -=  moveIncrement;
    //delay(70);
  }
  if (targetCenterX > thresholdRight)
  {
    panpos+=  moveIncrement;
    //delay(70);
  }

  //Fire
  if ((targetCenterX >= thresholdLeft) && (targetCenterX <= thresholdRight))
  {
    isFiring = true;
    time = millis(); // restart the fire-pulse timer; 'time' was never updated in the original
    println("Gotem");
    noFill();
  }
}

  }

  // send the updated pan angle to the servo once per frame;
  // panpos was changed above but never written to panServo in the original
  panServo.write(constrain(panpos, 0, 180));
}

void keyPressed() {
  if (key == 'm') {
    manual = !manual;
    println("manual mode toggled");
    isFiring = false;
  } else if (key == 'a' && manual) {
    panpos -= moveIncrement;
    println("left");
  } else if (key == 'f' && manual) {
    isFiring = !isFiring;
  } else if (key == 'd' && manual) {
    panpos += moveIncrement;
    println("right");
  } else if (key == 'c') {
    panServo.write(90);
  } else {
    println(key);
  }
}
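Cascade detection cost scales roughly with pixel count, so one common speed-up (a sketch, untested on the Pi; smallCV and detectSmall are hypothetical names) is to detect on a half-size copy of the frame and scale the rectangles back up:

int downscale = 2;   // 2 = a quarter of the pixels to scan
OpenCV smallCV;      // build once in setup():
                     //   smallCV = new OpenCV(this, widthCapture/downscale, heightCapture/downscale);
                     //   smallCV.loadCascade(OpenCV.CASCADE_FRONTALFACE);

Rectangle[] detectSmall(PImage frame) {
  PImage small = frame.copy();
  small.resize(widthCapture/downscale, heightCapture/downscale);
  smallCV.loadImage(small);
  Rectangle[] found = smallCV.detect();
  // map the detections back to full-resolution coordinates
  for (Rectangle r : found) {
    r.x *= downscale;
    r.y *= downscale;
    r.width *= downscale;
    r.height *= downscale;
  }
  return found;
}

Lowering the capture resolution itself (e.g. 160x120) gives a similar win with no extra copy, at the cost of display quality.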

How to distinguish two shapes from another


Hello, I'm trying to decide if an object looks more like a rectangle or a circle!

I have already transformed the image into a binary image and am using the OpenCV library.

I've tried to work with the contour of the object, but didn't find a way to translate it from OpenCV examples like http://www.pyimagesearch.com/2016/02/08/opencv-shape-detection/ .

If anybody has an idea on where to start or just some code as an example, that would be great!

My Code so far :

import gab.opencv.*;
import processing.video.*;

Capture webCam;
OpenCV finalImgcv, cv2;
PImage finalImg,  temp;
boolean foreGExists = false;
ArrayList<Contour> contours;
Contour M;

void setup(){
  size(640, 480);
  String[] cams = Capture.list();


  webCam = new Capture(this, width, height, cams[0], 30); // e.g. name=Vimicro USB2.0 Camera,size=640x480,fps=30
  webCam.start();


}
void captureEvent(Capture webCam) {
    webCam.read();
    }

void draw(){

  image(webCam, 0, 0);
   if(foreGExists){
     set(0, 0, finalImg);
   }else{
     set(0,0, webCam);
   }
}


void keyPressed(){

  if( key == 'b' ){
    webCam.loadPixels();
    PImage temp = createImage(640, 480, RGB);
    temp.loadPixels();
    arrayCopy(webCam.pixels, temp.pixels); // copy the pixels instead of sharing the camera's array
    temp.updatePixels();
    finalImgcv = new OpenCV(this, temp);
  }

  if( key == 'f' ){
    finalImgcv.diff(webCam);
    finalImgcv.threshold(10);
    finalImg = finalImgcv.getSnapshot();
    contours = finalImgcv.findContours();
    println("found " + contours.size() + " contours");
    M = contours.get(0);

    foreGExists = true;
  }
}
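One starting point that translates easily from the pyimagesearch approach is circularity, 4*PI*area/perimeter^2, which is about 1.0 for a circle and about 0.785 for a square. A sketch of it; this assumes gab.opencv's Contour exposes getPoints() and area(), so double-check against the library's javadoc:

float circularity(Contour c) {
  ArrayList<PVector> pts = c.getPoints();
  float perimeter = 0;
  for (int i = 0; i < pts.size(); i++) {
    PVector a = pts.get(i);
    PVector b = pts.get((i + 1) % pts.size()); // wrap around to close the contour
    perimeter += dist(a.x, a.y, b.x, b.y);
  }
  return (4 * PI * c.area()) / (perimeter * perimeter);
}

Usage would be something like: if (circularity(M) > 0.85) treat it as a circle, otherwise a rectangle; the cutoff needs tuning on real images.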

Does OpenKinect-for-Processing library recognize different body parts?


I have questions about the OpenKinect-for-Processing library. I am going to use a Kinect v1 with Processing 3. What I would like to do is use the Kinect to get the coordinates of body parts, such as the head, hands, etc.

Does the OpenKinect-for-Processing library have functionality which allows me to do that? Is it possible to enable the Kinect to recognise different body parts? Thank you very much for your help.

move blob with hand tracking HELP!


The blob moves according to the coordinates of the mouse, but I want to move the blob using the coordinates of my hand. So far, if you connect the Kinect, the blob and the red ball each move along their own variable (blob = mouseX/mouseY, red ball = hand tracking), but I want the blob to track the hand instead of the red ball.

Please don't mind the content of the movie; it was just puppy footage. I used Processing v2.2.1 and SimpleOpenNI 1.96. Thank you :)


import SimpleOpenNI.*;
import processing.video.*;

Movie movie;
SimpleOpenNI context;

color[] userClr = new color[] {
  color(255, 0, 0),
  color(0, 255, 0),
  color(0, 0, 255),
  color(255, 255, 0),
  color(255, 0, 255),
  color(0, 255, 255)
};

void setup() {
  size(854, 480);
  movie = new Movie(this, "dog2.mp4");
  movie.loop();

  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  context.setMirror(true);
  context.enableDepth();
  context.enableUser();

  background(255, 255, 255);

  stroke(0, 255, 0);
  strokeWeight(3);
  smooth();
}

void movieEvent(Movie movie) {
  movie.read();
}

void draw() {
  context.update();
  image(movie, 0, 0);
  loadPixels();
  movie.loadPixels();

  for (int x = 0; x < movie.width; x++) {
    for (int y = 0; y < movie.height; y++) {

  int loc = x + y*movie.width;

  float r = red  (movie.pixels[loc]);
  float g = green(movie.pixels[loc]);
  float b = blue (movie.pixels[loc]);

  float distance = dist(x, y, mouseX, mouseY);

  float adjustBrightness = map(distance, 0, 150, 2, 0);
  r *= adjustBrightness;
  g *= adjustBrightness;
  b *= adjustBrightness;

  // Constrain RGB to between 0-255
  r = constrain(r, 0, 255);
  g = constrain(g, 0, 255);
  b = constrain(b, 0, 255);

  // Make a new color and set pixel in the window
  color c = color(r, g, b);
  pixels[loc] = c;
}

}

updatePixels();

  int[] userList = context.getUsers();
  for (int i = 0; i < userList.length; i++) {
    if (context.isTrackingSkeleton(userList[i])) {
      stroke(userClr[(userList[i] - 1) % userClr.length]);
      drawSkeleton(userList[i]);
    }
  }
}

// draw the skeleton with the selected joints
void drawSkeleton(int userId) {
  // get the 3d joint data
  PVector jointPos = new PVector();
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, jointPos);
  println(jointPos);

  fill(255, 0, 0, 100);
  noStroke();

  PVector rightHand = new PVector();
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, rightHand);
  PVector convertedRightHand = new PVector();
  context.convertRealWorldToProjective(rightHand, convertedRightHand);
  ellipse(convertedRightHand.x, convertedRightHand.y, 50, 50);

  PVector leftHand = new PVector();
  context.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, leftHand);
  PVector convertedLeftHand = new PVector();
  context.convertRealWorldToProjective(leftHand, convertedLeftHand);
  ellipse(convertedLeftHand.x, convertedLeftHand.y, 50, 50);
}

void onNewUser(SimpleOpenNI curContext, int userId) {
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");
  curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId) {
  println("onLostUser - userId: " + userId);
}

void onVisibleUser(SimpleOpenNI curContext, int userId) {
  //println("onVisibleUser - userId: " + userId);
}
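To make the blob follow the hand instead of the mouse, one approach (a sketch against the code above) is to store the converted hand position in a sketch-level PVector, update it in drawSkeleton(), and use it in place of mouseX/mouseY in the pixel loop:

PVector handPos = new PVector(); // sketch-level; last known hand position

// in drawSkeleton(), after computing convertedRightHand:
//   handPos.set(convertedRightHand.x, convertedRightHand.y, 0);

// in draw(), replace the mouse-based distance:
//   float distance = dist(x, y, handPos.x, handPos.y);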

downloadable binary version of the libfreenect driver for Windows 8


Is there a downloadable binary version of the libfreenect driver for Windows 8, or do I have to build it from source?

Thanks,

Ken

Export positions of joints in Skeleton Tracking function (in KinectPV2) to obj file?


Hi everyone,

I am working on a project about hand tracking. I am trying to track the trace of my moving hands and export the trace as a 3D file like .obj.

I found the SkeletonColor example in KinectPV2 (https://github.com/ThomasLengeling/KinectPV2/tree/master/KinectPV2/examples/SkeletonColor),

which can help me to get the location of my hand

and RecordPointCloud example also in KinectPV2 (https://github.com/ThomasLengeling/KinectPV2/tree/master/KinectPV2/examples/RecordPointCloud),

which can help me record and export the PointCloud as an .obj file at each frame.

I am wondering: is there any way to combine these two?

I believe this is where the author starts to export the location data to 3D files (https://github.com/ThomasLengeling/KinectPV2/blob/master/KinectPV2/examples/RecordPointCloud/RecordPointCloud.pde):

void draw() {
   ...
  //get the points in 3d space
  FloatBuffer pointCloudBuffer = kinect.getPointCloudDepthPos();

  //allocate the current pointCloudBuffer into an array of FloatBuffers
  allocateFrame(pointCloudBuffer);

  //when the allocation is done write the obj frames
  writeFrames();
  ...
}

//allocate all the frame in a temporary array
void allocateFrame(FloatBuffer buffer) {
  if (recordFrame) {
    if ( frameCounter < numFrames) {
      FrameBuffer frameBuffer = new FrameBuffer(buffer);
      frameBuffer.setFrameId(frameCounter);
      mFrames.add(frameBuffer);
    } else {
      recordFrame = false;
      doneRecording = true;
    }
    frameCounter++;
  }
}

//Write all the frames recorded
void writeFrames() {
  if (doneRecording) {
    for (int i = 0; i < mFrames.size(); i++) {
      FrameBuffer fBuffer =  (FrameBuffer)mFrames.get(i);
      fBuffer.saveOBJFrame();
    }
    doneRecording = false;
    println("Done Recording frames: "+numFrames);
  }
}

I was trying to find a similar function for the skeleton class, to transform

FloatBuffer pointCloudBuffer = kinect.getPointCloudDepthPos();

to something like:

FloatBuffer handPositionBuffer = kinect.getHandDepthPos();

but I couldn't find any function like "getHandDepthPos",

and I couldn't find any function whose return type is FloatBuffer in the class

(where the author defines the "getPointCloudDepthPos()" function.) --> (https://github.com/ThomasLengeling/KinectPV2/blob/3bda24ba8b7c62155bf308c0d86c961ca89dbfa3/KinectPV2/src/KinectPV2/Device.java)

I also found that the specific joints I want to track are

    public final static int JointType_WristLeft     = 6;
    public final static int JointType_HandLeft      = 7;
    public final static int JointType_WristRight    = 10;
    public final static int JointType_HandRight     = 11;
    public final static int JointType_HandTipLeft   = 21;
    public final static int JointType_ThumbLeft     = 22;
    public final static int JointType_HandTipRight  = 23;
    public final static int JointType_ThumbRight    = 24;

from here (https://github.com/ThomasLengeling/KinectPV2/blob/master/KinectPV2/src/KinectPV2/SkeletonProperties.java)

Does anyone know how I can get these 8 specific joints and export them to 3D files? Or is there any other approach that would be simpler for achieving my goal?

Many thanks!
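Since the skeleton API doesn't seem to expose a FloatBuffer the way getPointCloudDepthPos() does, one workaround is to read the hand joints yourself each frame, accumulate them, and write them out as OBJ vertices the same way saveOBJFrame() does for the cloud. A minimal sketch, with method names taken from the library's Skeleton3d example (worth checking against your KinectPV2 version):

import KinectPV2.*;

KinectPV2 kinect;
ArrayList<PVector> handTrail = new ArrayList<PVector>();

void setup() {
  size(512, 424, P3D);
  kinect = new KinectPV2(this);
  kinect.enableSkeleton3dMap(true);
  kinect.init();
}

void draw() {
  background(0);
  ArrayList<KSkeleton> skeletons = kinect.getSkeleton3d();
  for (KSkeleton skeleton : skeletons) {
    if (!skeleton.isTracked()) continue;
    KJoint[] joints = skeleton.getJoints();
    // record both hand joints this frame (coordinates are in metres here)
    handTrail.add(new PVector(joints[KinectPV2.JointType_HandLeft].getX(),
                              joints[KinectPV2.JointType_HandLeft].getY(),
                              joints[KinectPV2.JointType_HandLeft].getZ()));
    handTrail.add(new PVector(joints[KinectPV2.JointType_HandRight].getX(),
                              joints[KinectPV2.JointType_HandRight].getY(),
                              joints[KinectPV2.JointType_HandRight].getZ()));
  }
}

void keyPressed() {
  // press 's' to dump the accumulated trail as OBJ vertices
  if (key == 's') {
    PrintWriter out = createWriter("handTrail.obj");
    for (PVector v : handTrail) {
      out.println("v " + v.x + " " + v.y + " " + v.z);
    }
    out.flush();
    out.close();
    println("wrote " + handTrail.size() + " vertices");
  }
}

This produces a vertex-only OBJ (no faces), which most 3D packages import as a point set; to 3D print the trace you would still need to mesh it afterwards.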

Colored window instead of skeleton with Thomas Sanchez skeleton3D sketch


Hello! I am encountering some difficulties with Thomas Sanchez Lengeling's Processing sketch "skeleton3D" in the "OpenCV-Processing" library. I have a Kinect v2 that seems to be well installed, since it works like a charm with other sketches. But when I run this sketch I get a whole colored window instead of seeing the skeleton when it is detected. I have no error messages in the Processing console when running the sketch. I work on Windows 10 with Processing 3.2.4. Any idea?
