Channel: Kinect - Processing 2.x and 3.x Forum

Having trouble with some code


I am working on a program using the Xbox Kinect, and I keep getting an error saying that my avgY doesn't exist (towards the bottom of the code, in the if statement). Any help? I am following Shiffman's videos on color tracking, BTW.

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;

import processing.video.*;

Kinect kinect;
Capture video;

color trackColor;

float threshold = 25;

void setup() {
  size(640, 520);
  kinect = new Kinect(this);
  video = new Capture(this, width, height);
  video.start(); // start capturing frames from the webcam
  kinect.initVideo();
}

void draw() {
  background(0);
  if (video.available()) {
    video.read(); // grab the latest webcam frame
  }
  video.loadPixels(); // make video.pixels valid before sampling it below
  // note: this displays the Kinect image while the loop below samples
  // the webcam's Capture pixels - two different cameras
  image(kinect.getVideoImage(), 0, 0);

 float record = 500; // lowest color distance seen so far (not used further)

 int avgX = 0;
 int avgY = 0;

 int count = 0;

 for (int x = 0; x < video.width; x++) {
   for (int y = 0; y < video.height; y++) {
     int loc = x + y * video.width;

     color currentColor = video.pixels[loc];
     float r1 = red(currentColor);
     float g1 = green(currentColor);
     float b1 = blue(currentColor);
     float r2 = red(trackColor);
     float g2 = green(trackColor);
     float b2 = blue(trackColor);

     // Euclidean distance between this pixel's color and the tracked color
     float d = dist(r1, g1, b1, r2, g2, b2);

     if (d < threshold){
      record = d;
      avgX += x;
      avgY += y;
      count++;

     }
   }
 }



if (count > 0) {
 avgX = avgX / count;
 avgY = avgY / count; // identifiers are case-sensitive: this must match the declared "avgY"

 fill(trackColor);
 strokeWeight(3.0);
 stroke(0);
 ellipse(avgX, avgY, 16, 16);
}
}

Kinect v1 - "There are no kinects, returning null"


I have a Kinect v1 (1473) that I'm trying to get working with Processing using Daniel Shiffman's Open Kinect for Processing library.

I've written a very basic program that should give me the depth image. After not getting the program to run at first, I found out that Kinect v1 doesn't need initDevice(), so I removed it and the sketch ran, but now Processing is giving me a console message stating:

There are no kinects, returning null

And indeed no image is showing. The device does, however, show up in Device Manager as "Kinect for Windows"...

I'm running Windows 10 and Processing 3.2.3. How can I get this to run?

Kinect with Processing on a Mac


Hi, I have an interactive project for a client that I now need to add Kinect support to - it's to allow for gesture / finger-painting style interaction.

It was built in Processing 3 on an iMac El Capitan.

Some questions:

  1. Which library do I need? OpenNI? Is it compatible with Processing 3?
  2. If it's not possible to develop on a Mac and I develop on a PC instead, will it still run on a Mac if exported as a stand-alone app?
  3. What is the best Kinect model to get?
  4. Do I need any extra cables?

Thanks for any help, Glenn.

Face recognition with openCV - compare faces


hi,

I've been looking into the amazing world of OpenCV for a coming little project of mine. I've managed to track my face using the webcam and to do some of the other operations that OpenCV provides. I would like to take one and only one snapshot of each different face that is detected, so I am trying to work out a way to tell whether the tracked face has been captured before or not. Is there a way with OpenCV to compare two faces and figure out whether they belong to the same person? Something tells me that this is not a straightforward task and requires some advanced deep-learning magic, but I'd be curious to hear any thoughts.

thank you
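
For what it's worth, below is a minimal sketch of the naive end of that spectrum, assuming the gab.opencv library plus a webcam via processing.video. It crops each detected face, shrinks it to a small thumbnail, and compares it against previously saved thumbnails by average per-pixel color distance. This is not real face recognition - lighting or pose changes will fool it - but it marks where a proper learned comparison would slot in. The threshold of 40 is an arbitrary guess to tune.

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;
import java.util.ArrayList;

Capture video;
OpenCV opencv;
ArrayList<PImage> knownFaces = new ArrayList<PImage>();

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);
  for (Rectangle r : opencv.detect()) {
    PImage face = video.get(r.x, r.y, r.width, r.height);
    face.resize(32, 32); // normalize size before comparing
    if (!seenBefore(face)) {
      knownFaces.add(face);
      face.save("face-" + knownFaces.size() + ".png"); // one snapshot per "new" face
    }
    noFill();
    stroke(0, 255, 0);
    rect(r.x, r.y, r.width, r.height);
  }
}

// Average per-pixel color distance against every stored thumbnail.
boolean seenBefore(PImage face) {
  face.loadPixels();
  for (PImage known : knownFaces) {
    known.loadPixels();
    float sum = 0;
    for (int i = 0; i < face.pixels.length; i++) {
      sum += dist(red(face.pixels[i]), green(face.pixels[i]), blue(face.pixels[i]),
                  red(known.pixels[i]), green(known.pixels[i]), blue(known.pixels[i]));
    }
    if (sum / face.pixels.length < 40) return true; // close enough: assume same face
  }
  return false;
}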

Idk what is wrong (color tracking)


So I am making a color tracking program using the Kinect, following Daniel Shiffman's videos. For some reason the program isn't working the way I want it to. I can see the video the Kinect is picking up, but I can't do what Shiffman does in his videos, where he clicks a color and the tracker follows it. Any tips? I think it is because I am using the Kinect library and a video library at the same time.

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;

import processing.video.*;

Kinect kinect;
Capture video;

color trackColor;

float threshold = 25;

void setup() {
  size(640, 520);
  kinect = new Kinect(this);
  video = new Capture(this, width, height);
  video.start(); // start capturing frames from the webcam
  kinect.initVideo();
  trackColor = color(255,0,0);
}

void draw() {
  background(0);
  if (video.available()) {
    video.read(); // grab the latest webcam frame
  }
  video.loadPixels(); // make video.pixels valid before sampling it below
  // note: this displays the Kinect image while the color search below
  // samples the webcam's Capture pixels - two different cameras
  image(kinect.getVideoImage(), 0, 0);


 int avgX = 0;
 int avgY = 0;

 int count = 0;

 for (int x = 0; x < video.width; x++) {
   for (int y = 0; y < video.height; y++) {
     int loc = x + y * video.width;

     color currentColor = video.pixels[loc];
     float r1 = red(currentColor);
     float g1 = green(currentColor);
     float b1 = blue(currentColor);
     float r2 = red(trackColor);
     float g2 = green(trackColor);
     float b2 = blue(trackColor);

     // Euclidean distance between this pixel's color and the tracked color
     float d = dist(r1, g1, b1, r2, g2, b2);

     if (d < threshold){

      avgX += x;
      avgY += y;
      count++;

     }
   }
 }



if (count > 0){
 avgX = avgX / count;
 avgY = avgY / count;

 fill(trackColor);
 strokeWeight(3.0);
 stroke(0);
 ellipse(avgX, avgY, 16, 16);
}
}

void mousePressed() {
 video.loadPixels(); // make sure pixels[] holds the current frame
 int loc = mouseX + mouseY * video.width;
 trackColor = video.pixels[loc];
}
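
One way to rule out the Capture/Kinect mismatch is to drop processing.video entirely and sample the Kinect's own RGB image, so the image you click on is the image you search. A minimal sketch, assuming Shiffman's Open Kinect for Processing library and the v1 camera's 640x480 image:

import org.openkinect.processing.*;

Kinect kinect;
color trackColor = color(255, 0, 0);
float threshold = 25;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initVideo();
}

void draw() {
  PImage img = kinect.getVideoImage(); // sample the same image we display
  image(img, 0, 0);
  img.loadPixels();

  int avgX = 0, avgY = 0, count = 0;
  for (int x = 0; x < img.width; x++) {
    for (int y = 0; y < img.height; y++) {
      color c = img.pixels[x + y * img.width];
      float d = dist(red(c), green(c), blue(c),
                     red(trackColor), green(trackColor), blue(trackColor));
      if (d < threshold) {
        avgX += x;
        avgY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    fill(trackColor);
    stroke(0);
    strokeWeight(3);
    ellipse(avgX / count, avgY / count, 16, 16); // centroid of all matching pixels
  }
}

void mousePressed() {
  PImage img = kinect.getVideoImage();
  img.loadPixels();
  trackColor = img.pixels[mouseX + mouseY * img.width]; // pick the color under the cursor
}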

Can I export a Kinect project with Microsoft SDK from a PC so that it runs on a Mac as an app?


I need it to run as a stand-alone app on a Mac - using Microsoft's Kinect SDK.

How to make text disappear when blob intersects words using Kinect V1?


Hi guys

I'm using a version 1 Kinect to make a blob interact with sound-reactive raining text. I was wondering how to make the text disappear once the blob intersects one of the words? Please help! Thank you!

//Make text individual words disappear (change position off page) if blob position is equal word position.

import org.openkinect.freenect.*;
import org.openkinect.processing.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

Minim minim;
AudioInput in;
FFT fft;
KinectTracker tracker;
Kinect kinect;

String [] data = {"327-64-4367", "448-01-4857","553-15-8880","016-10-8387","222-07-1302","574-68-5578","104-42-0570","237-40-7110","212-21-5143","559-30-1997", "574-76-8818", "007-64-7860", "145-42-9696", "574-24-6501", "477-49-7241", "517-62-2683", "315-52-7169", "750-12-8784", "678-20-3681","5895 13th Street, Webster, NY 14580","7776 Grove Street, Romulus, MI 48174","2294 Brook Lane, Paramus, NJ 07652","2424 Route 64, Maineville, OH 45039","9645 Valley View Drive, Tewksbury, MA 01876","153 Cross Street, The Villages, FL 32162","920 Route 30, Loxahatchee, FL 33470","238 Lincoln Street, Jamestown, NY 14701","729 Meadow Street, Dubuque, IA 52001","133 Franklin Court, South Portland, ME 04106","994 Elm Avenue, Sun Prairie, WI 53590","392 Woodland Drive, Liverpool, NY 13090","901 Willow Lane, Bridgeport, CT 06606","92236","07731","45420","10701","08094","07030","19047","41017","17543","80302","80021","60134","32082","(494) 133-5459","(249) 186-5356","(818) 402-5177","(424) 926-1653","(201) 672-5146","(245) 199-2255","(147) 520-8998","(349) 179-5381","(758) 621-4845","(988) 984-0554","(786) 693-1798","(291) 731-8164","(345) 209-6039","(537) 175-2683","(918) 990-4772","(820) 906-9228","(916) 565-9637","(633) 293-3501","(881) 991-5264","(193) 382-1818","(329) 603-4359","(160) 347-3446","(870) 293-5829","5165 1130 7400 9060","6011 5436 3995 4943","3455 419447 60971","3486 888573 38316","6011 5910 4628 7894","6011 7865 3449 2309","4929 2788 4706 8154","5427 8194 1616 5774","5446 9823 4141 6972","6011 2940 9543 3242","4532 8600 5212 9117","6011 9832 4310 8231","3784 431486 81659","5520 6129 5998 3075","4485 5114 2769 9519","4716 7962 0326 5780","3418 877718 33322","4539 1378 7971 4063","3730 394465 12151","4539 4187 1183 6953","5295 2968 4751 7338","3755 105752 68078","157.95.65.166","107.165.28.12","75.62.96.132","249.88.95.147","187.244.83.79","134.131.159.96","49.206.109.182","95.33.230.69","211.162.26.78","186.58.2.250","PcCX5mQhf","s58@ubFUM","n1IPf4zJo","ShSUN0pPc","d-2eug@Qj","zy*@fYOaG","npujcoBVE","Kenzie Flores","Shamar Fox","Nadia Hatfield","Brodie Hood","Nathanial James","Leanna Woodard","Wilson Murillo","Lillie Sullivan","Lillianna Mcintyre","Edward Chan","Jay Chambers","Shirley Vaughn","Jaylah Daniel","Eden Gordon","Joanna Freeman","Jazmyn Hamilton","Molly Costa","Julie Snow","Kole Wu","Raul Santos","Jadiel Mercado","Anna Silva","Mckinley Wyatt","Lance Odom","Nylah Whitney","Melanie Berger","Colin Strong","Irvin Kent","Donna Levine","Tess Wilkinson","Micheal Shields","Helen Rocha","Destinee Bowers","Keyon Graves","Bradley Knight","Kaitlynn Santos","Natalya Hernandez","Humberto Knapp","Camryn Farrell","Pablo Aguilar"};  //40

String note;
color c;
int n;
int noteNumber;
int sampleRate= 44100;

float [] max= new float [sampleRate/2];
float maximum;
float frequency;
float hertz;
float midi;
float deg;
float brightnessThresh;

boolean ir = false;
boolean colorDepth = false;
boolean mirror = false;

Drop[] drops = new Drop[150];

void setup() {
  size(1280, 520); //size of kinect screen
  textSize(8);
  kinect = new Kinect(this);
  tracker = new KinectTracker();
  kinect.initDepth();
  kinect.initVideo();
  //kinect.enableIR(ir);
  kinect.enableColorDepth(colorDepth);

  brightnessThresh = 0;

  //mirror Image
  mirror = !mirror;
  kinect.enableMirror(mirror);

  deg = kinect.getTilt();
  // kinect.tilt(deg);

  minim = new Minim(this);
  minim.debugOn();
  in = minim.getLineIn(Minim.MONO, 4096, sampleRate);
  fft = new FFT(in.left.size(), sampleRate);

  textAlign(CENTER);
  for (int i = 0; i < drops.length; i++) {
    drops[i] = new Drop();
  }
}

void draw() {
  background(0);
  //smooth();
  findNote();
  image(kinect.getVideoImage(), 0, 0);
  image(kinect.getDepthImage(), 640, 0); //

  // Run the tracking analysis
  tracker.track();
  // Show the image
  tracker.display();

  int t = tracker.getThreshold();

  for (int i = 0; i < drops.length; i++) {
    drops[i].fall();
    drops[i].show();
  }

  fill(255);
  /*text(
    "Press 'i' to enable/disable between video image and IR image,  " +
    "Press 'c' to enable/disable between color depth and gray scale depth,  " +
    "Press 'm' to enable/diable mirror mode, "+
    "UP and DOWN to tilt camera   " +
    "Framerate: " + int(frameRate), 10, 515);
  text("threshold: " + t + "    " +  "framerate: " + int(frameRate) + "    " +
    "UP increase threshold, DOWN decrease threshold", 10, 500); */

  }

void keyPressed() {
  int t = tracker.getThreshold();
  if (key == 'i') {
    //ir = !ir;
    //kinect.enableIR(ir);
  } else if (key == 'c') {
    colorDepth = !colorDepth;
    kinect.enableColorDepth(colorDepth);
  } else if (key == CODED) {
    if (keyCode == UP) {
      deg++;
    } else if (keyCode == DOWN) {
      deg--;
    } else if (keyCode == RIGHT) {
      t += 5;
      tracker.setThreshold(t);
    } else if (keyCode == LEFT) {
      t -= 5;
      tracker.setThreshold(t);
    }
    deg = constrain(deg, 0, 30);
    kinect.setTilt(deg);
  }
}

class KinectTracker {
  int threshold = 1000; //depth threshold
  int[] depth;
  PImage display;
  PVector loc = new PVector(0, 0); // blob center of mass, updated by track()

  KinectTracker() {
    kinect.initDepth();
    kinect.enableMirror(true);
    // Make a blank image
    display = createImage(kinect.width, kinect.height, RGB);

  }

  void track() {
    depth = kinect.getRawDepth(); //Get raw depth as array of integers

    if (depth == null) return;

    float sumX = 0;
    float sumY = 0;
    float count = 0;

    for (int x = 0; x < kinect.width; x++) {
      for (int y = 0; y < kinect.height; y++) {

        int offset =  x + y*kinect.width;
        // Grabbing the raw depth
        int rawDepth = depth[offset];

        // Testing against threshold
        if (rawDepth < threshold) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }

    // Store the blob's center of mass so other code (e.g. the drops) can query it
    if (count > 0) {
      loc = new PVector(sumX / count, sumY / count);
    }
  }

  // Most recent blob center, as computed by track()
  PVector getPos() {
    return loc;
  }

  void display() {
    PImage img = kinect.getDepthImage();

    if (depth == null || img == null) return;

    // Going to rewrite the depth image to show which pixels are in threshold
    display.loadPixels();
    for (int x = 0; x < kinect.width; x++) {
      for (int y = 0; y < kinect.height; y++) {

        int offset = x + y * kinect.width;
        // Raw depth
        int rawDepth = depth[offset];
        int pix = x + y * display.width;
        if (rawDepth < threshold) {
          display.pixels[pix] = color(c, 150); // current note color, semi-transparent
        } else {
          display.pixels[pix] = color(0);
        }
      }
    }
    display.updatePixels();


    // Draw blob image
    image(display, 0, 0);


  }

  int getThreshold() {
    return threshold;
  }
  void setThreshold(int t) {
    threshold =  t;
  }
}


//NOTES With Sounds
void findNote() {
  fft.forward(in.left);
  for (int f=0;f<sampleRate/2;f++) { //analyses the amplitude of each frequency analysed, between 0 and 22050 hertz
    max[f]=fft.getFreq(float(f)); //each index is correspondent to a frequency and contains the amplitude value
  }
  maximum=max(max);//get the maximum value of the max array in order to find the peak of volume

  for (int i=0; i<max.length; i++) {
    if (max[i] == maximum) {
      frequency= i;
    }
  }

  midi = 69 + 12 * (log(frequency / 440.0) / log(2)); // frequency to MIDI note number (log base 2)
  n = int(midi);

//the octave has 12 tones and semitones.
if (n%12==9)
  {
    note = ("a");
    c = color (255, 99, 0);
  }

  if (n%12==10)
  {
    note = ("a#");
    c = color (255, 236, 0);
  }

  if (n%12==11)
  {
    note = ("b");
    c = color (153, 255, 0);
  }

  if (n%12==0)
  {
    note = ("c");
    c = color (40, 255, 0);
  }

  if (n%12==1)
  {
    note = ("c#");
    c = color (0, 255, 232);
  }

  if (n%12==2)
  {
    note = ("d");
    c = color (0, 124, 255);
  }

  if (n%12==3)
  {
    note = ("d#");
    c = color (5, 0, 255);
  }

  if (n%12==4)
  {
    note = ("e");
    c = color (69, 0, 234);
  }

  if (n%12==5)
  {
    note = ("f");
    c = color (85, 0, 79);
  }

  if (n%12==6)
  {
    note = ("f#");
    c = color (116, 0, 0);
  }

  if (n%12==7)
  {
    note = ("g");
    c = color (179, 0, 0);
  }

  if (n%12==8)
  {
    note = ("g#");
    c = color (238, 0, 0);
  }
}

void stop()
{
  in.close();
  minim.stop();

  super.stop();
}

class Drop {
  float x;
  float y;
  float z;
  float len;
  float yspeed;
  String textHolder = "text";
  float word;

  Drop() {
    x  = random(width); //rain drops at random width on x-axis
    y  = random(0, 700); //sections of rain
    z  = random(0, 1); //general speed
    len = map(z, 0, 20, 10, 20);
    yspeed  = map(z, 0, 20, 5, 2); //Speeds of raindrops, 4 is variation

    textHolder = data[int(random(data.length))];

  }

  void fall() { //speed of rain
    y = y + yspeed;
    float grav = map(z, 0, 20, 0, 0.2); //(z, 0, 20, 0, 0.2)
    yspeed = yspeed - grav; //+ grav;


    if (y > height) {
      y = random(-500, 40);
      yspeed = map(z, 0, 20, 4, 20);  //(z, 0, 20, 4, 1000);
    }
  }

  void show() {
    fill(c); // current note color from findNote()
    text(textHolder, x, y); // draw this drop's word at its current position
  }
}
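
One way to get the disappearing behaviour, sketched under the assumption that KinectTracker exposes its center of mass via getPos() as above: give each Drop a check against the blob position, and respawn the word off-screen when it comes within some radius (the 30 px here is an arbitrary value to tune). Call drops[i].checkBlob(tracker.getPos()); from draw(), next to fall() and show().

// Inside class Drop:
void checkBlob(PVector blob) {
  // The depth image and the rain share the same coordinate space,
  // since both are drawn from the sketch origin.
  if (dist(blob.x, blob.y, x, y) < 30) {
    y = random(-500, -50); // respawn the word above the screen
    textHolder = data[int(random(data.length))]; // with a new piece of text
  }
}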

Problems with kinect4WinSDK


Hey, I've got a Kinect v2 and tried to get the kinect4WinSDK library running, but I'm getting the following error message: "A library relies on native code that's not available. Or only works properly when the sketch is run as a 64-bit application."

I already moved the Processing folder (with processing.exe) to C:\, and I uninstalled the 64-bit version (where I got the same error message, just mentioning 32-bit) and replaced it with the 32-bit version. I also uninstalled the kinect4WinSDK library and reinstalled it. None of this worked. I've googled a lot but found nothing that helped.


Kinect and colour mapping


Hey guys, I'm using my Kinect depth sensor with a fire-burning script; it uses the Kinect depth to affect the probability of burning. I would like to use the Kinect to create a series of colors between certain elevations. These colors should also correspond to certain land types that affect the probability of burning.

I have the script set up so far, based on an initial fire script, but I am unsure how to make the depth visible as colors.

This is what I wrote to use the Kinect as a lattice that maps color values:

void populateLatticeFromKinect() {
  // loop through the lattice
  // (an earlier version iterated over lat1.w / lat1.h kernels instead)
  for (float x = 0; x < depthLat.w; x++) {
    for (float y = 0; y < depthLat.h; y++) {

      // depth = kinect depth at this point
      float depth = depthLat.get((int) x, (int) y);

      if (depth < 0.8 && depth > 0.6) {
        depth = FUEL;
      }

      depthLat.put((int) x, (int) y, depth);

      // if (depth < 0.3 && depth > 0.15) { set it to another value }
    }
  }
}

I've attached the full script here, along with the initial script I am working from. (I'm not sure if this will work.) Any help would be greatly appreciated. I'm a novice coder and I'm still wrapping my head around the logic of Processing.
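
Not the poster's setup, but as a sketch of the general idea: depth bands can be mapped to land-type colors with map() and lerpColor(). Everything here - the band limits, the colors, and the cellSize in the usage comment - is an invented placeholder to adapt:

// Hypothetical helper: turn a normalized depth value (0..1) into a land-type color.
color landColor(float depth) {
  if (depth > 0.6 && depth < 0.8) {
    // within the "fuel" band: shade from light to dark green by elevation
    float t = map(depth, 0.6, 0.8, 0, 1);
    return lerpColor(color(120, 200, 80), color(20, 90, 30), t);
  } else if (depth >= 0.8) {
    return color(70, 110, 200);  // low ground, e.g. water
  } else {
    return color(150, 150, 150); // high ground, e.g. rock
  }
}

// Usage inside the lattice loop, instead of (or alongside) writing FUEL:
//   fill(landColor(depthLat.get((int) x, (int) y)));
//   rect(x * cellSize, y * cellSize, cellSize, cellSize);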

For Mac development, do I need a Kinect Adapter for Windows?


I'm about to start a project using Processing and Open Kinect on my iMac. I'm about to purchase a Kinect V2, but I'm unsure whether I need the Windows adapter or not - could someone please advise me? Thanks, Glenn.

Kinect with grid


I'm new to Processing. I'm trying to make a fading background, and as the user moves, the movement should stay on the grid.

So I have made a grid where, as I move, I can see my body movement on the grid.

1) How do I make the movement stay on the grid after I move?

2) How do I give each user a color, for example player 1 = red, player 2 = green?

3) How do I make the background color fade between random RGB values?

import processing.serial.*;
import KinectPV2.*;


Serial myPort;
KinectPV2 kinect;

boolean foundUsers = false;

int matrixSizeWidth = 14;
int matrixSizeHeight = 18;

void setup() {
  size(1024, 424);
  frameRate(200);

  //String portName = "COM5";
 // myPort = new Serial(this, portName, 115200);

  kinect = new KinectPV2(this);

  //kinect.enableDepthImg(true);
  kinect.enableBodyTrackImg(true);

  kinect.init();

  delay(6000);
}

void draw() {
  clear();
  background(255);

  PImage kinectImg = kinect.getBodyTrackImage();

  image(kinectImg, 512, 0);

  int [] rawData = kinect.getRawBodyTrack();

  foundUsers = false;
  //iterate through 1/5th of the data
  for(int i = 0; i < rawData.length; i+=5){
    if(rawData[i] != 255){
     //found something
     foundUsers = true;
     break;
    }
  }
 // print(rawData);

  int totalPixels = 0;

  //if (foundUsers)
  //{
   // myPort.clear();

    int row = 0;

    //color toColor = -(int)random(255*255*255);
    color thatColor = -(int)random(255*255*255);

    for (int y = 10; y < 414; y += matrixSizeHeight){
      int col = 0;

      for (int x = 10; x < 500; x += matrixSizeWidth) {
        color c = kinectImg.pixels[x + y*kinectImg.width];

        // print(c);

        if(c < -1)
        {
        //c = toColor;
        }
        else{
        c = thatColor;
        }

        // print(c); // printing every grid cell each frame slows the sketch considerably

        fill(c);
        stroke(0);
        strokeWeight(1);
        rect(x, y, matrixSizeWidth, matrixSizeHeight);

        //if (totalPixels < 105)
        //{
          if (c != -1)
          {
            //print(1);
           // myPort.write("H");
            totalPixels++;
            //myPort.write(totalPixels);
          }
          else
          {
            //print(0);
           // myPort.write("L");
          }
        //}


        col++;
      }

      row++;
    }


   // myPort.write("C");

  fill(0);
  textSize(14);
  text(kinect.getNumOfUsers(), 10, 30);
  text("Found User: "+foundUsers, 10, 50);
  text(frameRate, 10, 70);
  text("Total Pixels: " + totalPixels, 10, 90);
}
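
For question 1, one approach is a persistent boolean grid that latches a cell once a body pixel lands in it. Below is a minimal sketch under two assumptions: the cell sizes above are kept, and pure white in the body-track image means "no user" (matching the rawData value 255 checked above). For question 2, the values in getRawBodyTrack() are per-user indices (255 = no user), so they can be used to pick a per-player color. markGrid() and drawGrid() are hypothetical helpers to call from draw():

// Persistent occupancy grid: once a cell sees a user, it stays marked.
// Declare after matrixSizeWidth/matrixSizeHeight so the sizes are set.
boolean[][] visited = new boolean[512 / matrixSizeWidth + 1][424 / matrixSizeHeight + 1];

void markGrid(PImage kinectImg) {
  kinectImg.loadPixels();
  for (int y = 10; y < 414; y += matrixSizeHeight) {
    for (int x = 10; x < 500; x += matrixSizeWidth) {
      color c = kinectImg.pixels[x + y * kinectImg.width];
      if (c != color(255)) { // assumed: white background = no user
        visited[x / matrixSizeWidth][y / matrixSizeHeight] = true; // latch this cell
      }
    }
  }
}

void drawGrid() {
  for (int gx = 0; gx < visited.length; gx++) {
    for (int gy = 0; gy < visited[0].length; gy++) {
      if (visited[gx][gy]) {
        fill(255, 0, 0); // latched cells stay colored after the user moves away
        rect(gx * matrixSizeWidth, gy * matrixSizeHeight, matrixSizeWidth, matrixSizeHeight);
      }
    }
  }
}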

How to implement Spout 2.05 in Processing on Windows?


Forgive the very basic nature of this query, but I am very new to Processing (and indeed programming). I am trying to use a Kinect 1414 with Processing 2.2.1 and Isadora according to this tutorial: http://troikatronix.com/support/kb/kinect-tutorial-part2/. Since the tutorial was written, Spout has been upgraded, and I am trying without success to change the code according to the recommendations for 2.05: https://github.com/leadedge/SpoutProcessing/releases. I have imported the Spout library. The original code for the Processing sketch is below.

/* --------------------------------------------------------------------------
 * SimpleOpenNI User Test
 * --------------------------------------------------------------------------
 * Processing Wrapper for the OpenNI/Kinect 2 library
 * http://code.google.com/p/simple-openni
 * --------------------------------------------------------------------------
 * prog: Max Rheiner / Interaction Design / Zhdk / http://iad.zhdk.ch/
 * date: 12/12/2012 (m/d/y)
 * ----------------------------------------------------------------------------
 */

import SimpleOpenNI.*;

PGraphics canvas;

color[] userClr = new color[] {
  color(255, 0, 0),
  color(0, 255, 0),
  color(0, 0, 255),
  color(255, 255, 0),
  color(255, 0, 255),
  color(0, 255, 255)
};

PVector com = new PVector();
PVector com2d = new PVector();

// --------------------------------------------------------------------------------
// CAMERA IMAGE SENT VIA SPOUT
// --------------------------------------------------------------------------------
int kCameraImage_RGB = 1;    // rgb camera image
int kCameraImage_IR = 2;     // infra red camera image
int kCameraImage_Depth = 3;  // depth without colored bodies of tracked bodies
int kCameraImage_User = 4;   // depth image with colored bodies of tracked bodies

int kCameraImageMode = kCameraImage_User; // << set this value to one of the kCameraImage constants above

// --------------------------------------------------------------------------------
// SKELETON DRAWING
// --------------------------------------------------------------------------------
boolean kDrawSkeleton = true; // << set to true to draw the skeleton, false to not draw it

// --------------------------------------------------------------------------------
// OPENNI (KINECT) SUPPORT
// --------------------------------------------------------------------------------

import SimpleOpenNI.*; // import SimpleOpenNI library

SimpleOpenNI context;

private void setupOpenNI() {
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  // enable depthMap generation
  context.enableDepth();
  context.enableUser();

  // disable mirror
  context.setMirror(false);
}

private void setupOpenNI_CameraImageMode() {
  println("kCameraImageMode " + kCameraImageMode);

  switch (kCameraImageMode) {
  case 1: // kCameraImage_RGB:
    context.enableRGB();
    println("enable RGB");
    break;
  case 2: // kCameraImage_IR:
    context.enableIR();
    println("enable IR");
    break;
  case 3: // kCameraImage_Depth:
    context.enableDepth();
    println("enable Depth");
    break;
  case 4: // kCameraImage_User:
    context.enableUser();
    println("enable User");
    break;
  }
}

private void OpenNI_DrawCameraImage() {
  switch (kCameraImageMode) {
  case 1: // kCameraImage_RGB:
    canvas.image(context.rgbImage(), 0, 0);
    // println("draw RGB");
    break;
  case 2: // kCameraImage_IR:
    canvas.image(context.irImage(), 0, 0);
    // println("draw IR");
    break;
  case 3: // kCameraImage_Depth:
    canvas.image(context.depthImage(), 0, 0);
    // println("draw DEPTH");
    break;
  case 4: // kCameraImage_User:
    canvas.image(context.userImage(), 0, 0);
    // println("draw DEPTH");
    break;
  }
}

// --------------------------------------------------------------------------------
// OSC SUPPORT
// --------------------------------------------------------------------------------

import oscP5.*; // import OSC library
import netP5.*; // import net library for OSC

OscP5 oscP5;                       // OSC input/output object
NetAddress oscDestinationAddress;  // the destination IP address - 127.0.0.1 to send locally
int oscTransmitPort = 1234;        // OSC send target port; 1234 is default for Isadora
int oscListenPort = 9000;          // OSC receive port number

private void setupOSC() {
  // init OSC support, listening on oscListenPort
  oscP5 = new OscP5(this, oscListenPort);
  oscDestinationAddress = new NetAddress("127.0.0.1", oscTransmitPort);
}

private void sendOSCSkeletonPosition(String inAddress, int inUserID, int inJointType) {
  // create the OSC message with target address
  OscMessage msg = new OscMessage(inAddress);

  PVector p = new PVector();
  float confidence = context.getJointPositionSkeleton(inUserID, inJointType, p);

  // add the three vector coordinates to the message
  msg.add(p.x);
  msg.add(p.y);
  msg.add(p.z);

  // send the message
  oscP5.send(msg, oscDestinationAddress);
}

private void sendOSCSkeleton(int inUserID) {
  sendOSCSkeletonPosition("/head", inUserID, SimpleOpenNI.SKEL_HEAD);
  sendOSCSkeletonPosition("/neck", inUserID, SimpleOpenNI.SKEL_NECK);
  sendOSCSkeletonPosition("/torso", inUserID, SimpleOpenNI.SKEL_TORSO);

  sendOSCSkeletonPosition("/left_shoulder", inUserID, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  sendOSCSkeletonPosition("/left_elbow", inUserID, SimpleOpenNI.SKEL_LEFT_ELBOW);
  sendOSCSkeletonPosition("/left_hand", inUserID, SimpleOpenNI.SKEL_LEFT_HAND);

  sendOSCSkeletonPosition("/right_shoulder", inUserID, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  sendOSCSkeletonPosition("/right_elbow", inUserID, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  sendOSCSkeletonPosition("/right_hand", inUserID, SimpleOpenNI.SKEL_RIGHT_HAND);

  sendOSCSkeletonPosition("/left_hip", inUserID, SimpleOpenNI.SKEL_LEFT_HIP);
  sendOSCSkeletonPosition("/left_knee", inUserID, SimpleOpenNI.SKEL_LEFT_KNEE);
  sendOSCSkeletonPosition("/left_foot", inUserID, SimpleOpenNI.SKEL_LEFT_FOOT);

  sendOSCSkeletonPosition("/right_hip", inUserID, SimpleOpenNI.SKEL_RIGHT_HIP);
  sendOSCSkeletonPosition("/right_knee", inUserID, SimpleOpenNI.SKEL_RIGHT_KNEE);
  sendOSCSkeletonPosition("/right_foot", inUserID, SimpleOpenNI.SKEL_RIGHT_FOOT);
}

// --------------------------------------------------------------------------------
// SPOUT SUPPORT
// --------------------------------------------------------------------------------

Spout server;

private void setupSpoutServer(String inServerName, int inWidth, int inHeight) {
  // Create a Spout server to send frames out.
  server = new Spout();

  server.initSender(inServerName, inWidth, inHeight);
}

// --------------------------------------------------------------------------------
// EXIT HANDLER
// --------------------------------------------------------------------------------
// called on exit to gracefully shut down the Spout server
private void prepareExitHandler() {
  Runtime.getRuntime().addShutdownHook(
    new Thread(
      new Runnable() {
        public void run() {
          try {
            // if (server.hasClients()) {
            server.closeSender();
            // }
          }
          catch (Exception ex) {
            ex.printStackTrace(); // not much else to do at this point
          }
        }
      }
    )
  );
}

// --------------------------------------------------------------------------------
// MAIN PROGRAM
// --------------------------------------------------------------------------------
void setup() {
  int canvasWidth = 640;
  int canvasHeight = 480;

  size(canvasWidth, canvasHeight, P3D);
  canvas = createGraphics(canvasWidth, canvasHeight, P3D);

  textureMode(NORMAL);

  println("Setup Canvas");

  // canvas.background(200, 0, 0);
  canvas.stroke(0, 0, 255);
  canvas.strokeWeight(3);
  canvas.smooth();
  println("-- Canvas Setup Complete");

  // setup Spout server
  println("Setup Spout");
  setupSpoutServer("Depth", canvasWidth, canvasHeight);

  // setup Kinect tracking
  println("Setup OpenNI");
  setupOpenNI();
  setupOpenNI_CameraImageMode();

  // setup OSC
  println("Setup OSC");
  setupOSC();

  // setup the exit handler
  println("Setup Exit Handler");
  prepareExitHandler();
}

void draw() {
  // update the cam
  context.update();

  canvas.beginDraw();

  // draw image
  OpenNI_DrawCameraImage();

  // draw the skeleton if it's available
  if (kDrawSkeleton) {

    int[] userList = context.getUsers();
    for (int i=0; i<userList.length; i++)
    {
        if (context.isTrackingSkeleton(userList[i]))
        {
            canvas.stroke(userClr[ (userList[i] - 1) % userClr.length ] );

            drawSkeleton(userList[i]);

            if (userList.length == 1) {
                sendOSCSkeleton(userList[i]);
            }
        }

        // draw the center of mass
        if (context.getCoM(userList[i], com))
        {
            context.convertRealWorldToProjective(com, com2d);

            canvas.stroke(100, 255, 0);
            canvas.strokeWeight(1);
            canvas.beginShape(LINES);
            canvas.vertex(com2d.x, com2d.y - 5);
            canvas.vertex(com2d.x, com2d.y + 5);
            canvas.vertex(com2d.x - 5, com2d.y);
            canvas.vertex(com2d.x + 5, com2d.y);
            canvas.endShape();

            canvas.fill(0, 255, 100);
            canvas.text(Integer.toString(userList[i]), com2d.x, com2d.y);
        }
    }
  }

  canvas.endDraw();

  image(canvas, 0, 0);

  // send image to spout
  server.sendTexture();
}

// draw the skeleton with the selected joints
void drawLimb(int userId, int inJoint1) {
}

// draw the skeleton with the selected joints
void drawSkeleton(int userId) {
  canvas.stroke(255, 255, 255, 255);
  canvas.strokeWeight(3);

  drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);

  drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);

  drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);

  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);

  drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);

  drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}

void drawLimb(int userId, int jointType1, int jointType2) {
  float confidence;

  // get the joint positions
  PVector a_3d = new PVector();
  confidence = context.getJointPositionSkeleton(userId, jointType1, a_3d);
  PVector b_3d = new PVector();
  confidence = context.getJointPositionSkeleton(userId, jointType2, b_3d);

  // convert the real-world positions to projective (screen) space
  PVector a_2d = new PVector();
  context.convertRealWorldToProjective(a_3d, a_2d);
  PVector b_2d = new PVector();
  context.convertRealWorldToProjective(b_3d, b_2d);

  canvas.line(a_2d.x, a_2d.y, b_2d.x, b_2d.y);
}

// -----------------------------------------------------------------
// SimpleOpenNI events

void onNewUser(SimpleOpenNI curContext, int userId) {
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");

  curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId) {
  println("onLostUser - userId: " + userId);
}

void onVisibleUser(SimpleOpenNI curContext, int userId) {
  //println("onVisibleUser - userId: " + userId);
}

void keyPressed() {
  switch(key) {
  case ' ':
    context.setMirror(!context.mirror());
    println("Switch Mirroring");
    break;
  }
}
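
For the actual question - adapting the three Spout calls to the 2.05 library - the newer Spout for Processing releases changed the sender setup: the constructor takes the sketch's PApplet, and the sender is created by name (the same pattern appears in the KinectPV2 masking post elsewhere in this feed). A sketch of the translation, offered as an assumption to verify against the 2.05 examples:

Spout server;

private void setupSpoutServer(String inServerName, int inWidth, int inHeight) {
  server = new Spout(this);          // 2.05-style: constructor takes the PApplet
  server.createSender(inServerName); // replaces initSender(name, width, height)
}

// in draw(), after canvas.endDraw():
//   server.sendTexture(canvas);     // send the PGraphics explicitly
// in the exit handler, closeSender() is unchanged:
//   server.closeSender();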

SimpleOpenNI Library error occurs after update processing


I am trying to make a project with a Kinect v1 and Processing 3.2.1. Before I updated Processing, everything was working (I had just tried the "Hello" examples from Daniel Shiffman). But now I use Processing 3.2.1 and a "SimpleOpenNI library cannot be found" error occurs. I deleted all the libraries and downloaded them again. I downloaded the library from this link: https://code.google.com/archive/p/simple-openni/downloads and even though I put the library in ..sketchfolder/libraries, I can't run the example codes. I am using Windows 10. How can I fix this problem? I just want to run the example codes.

How to do skeleton tracking through defining a class?


Dear All,

Thank you for your investigations in Processing. I am going to implement a project in which I play a video in the background and apply effects through different classes. In one of the effects, I have to track the skeletons of human bodies. Here is my code:

import processing.video.*;
import SimpleOpenNI.*;
import java.util.*;

Movie movie1;
SimpleOpenNI kinect;
effect1 x1;
// effect2 x2; // the second effect class is not shown in this post

void setup() {
  size(1280, 960, P2D);

  movie1 = new Movie(this, "moon.mov");
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
  kinect.enableUser();

  x1 = new effect1(kinect);

}

void draw() {

  movie1.play();

  int m = millis();
  if ((m / 1000) < 10) {
    x1.run();
  } else {
    image(movie1, 0, 0, width, height);
  }

}

void movieEvent(Movie m) {
  m.read();
}

class effect1 {

  int[] userID;
  int[] userColor = { color(255, 0, 0), color(0, 255, 0), color(0, 0, 255) }; // per-user draw colors
  PVector location, velocity, acceleration, headPosition, confidenceVector;
  float confidenceLevel = 0.4;
  float confidence = 0;

  effect1(SimpleOpenNI kinect) {
    headPosition = new PVector();
    confidenceVector = new PVector();
    if (kinect.isInit() == false) {
      println("Can't init SimpleOpenNI, maybe the camera is not connected!");
      exit();
      return;
    }
  }

  void run() {
    kinect.update();
    userID = kinect.getUsers();
    background(255);
    //ellipse(100, 100, 20, 20);
    for (int i = 0; i < userID.length; i++) {
      confidence = kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_HEAD, confidenceVector);

      // if confidence of tracking is beyond threshold, then track user
      if (confidence > confidenceLevel) {
        // change draw color based on user id#
        stroke(userColor[i % userColor.length]);
        // fill the ellipse with the same color
        fill(userColor[i % userColor.length]);
        // if Kinect is tracking a certain user, then get joint vectors
        if (kinect.isTrackingSkeleton(userID[i])) {
          kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_TORSO, headPosition);
          // convert real world point to projective space
          kinect.convertRealWorldToProjective(headPosition, headPosition);
          fill(255, 0, 0);
          ellipse(headPosition.x, headPosition.y, 20, 20);
        }
      }
    }
  }
}

The following error is generated:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ffb44597040, pid=1084, tid=0x00000000000013a4
#
# JRE version: Java(TM) SE Runtime Environment (8.0_111-b14) (build 1.8.0_111-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.111-b14 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C  0x00007ffb44597040
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# D:\processing project\processing-3.2.3\hs_err_pid1084.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Could not run the sketch (Target VM failed to initialize). For more information, read revisions.txt and Help → Troubleshooting.

Kinect Colour Tracking Issue


I've been trying to get the Kinect and colour tracking working together, and recently made some sort of breakthrough. The code is below, but there are still a few issues I found. When I set the display size, the Kinect only seemed to capture the top-left part of the screen. Another issue I noticed was that when I clicked on a colour to track, the tracker seemed to follow something else (it jumped all over the screen and never landed on anything). I am not exactly sure if it is due in part to the size - but when I increased it, the tracker would go off-screen too. Any help would be appreciated, thanks!

    //import processing.video.*;
    import org.openkinect.freenect.*;
    import org.openkinect.freenect2.*;
    import org.openkinect.processing.*;
    import org.openkinect.tests.*;

    // Variable for capture device
    Kinect2 kinect2;

    // A variable for the color we are searching for.
    color trackColor;

    // shows what users see
    PImage display;

    // location
    PVector loc;

    void setup() {
      size(640, 480);
      // Start off tracking for red
      trackColor = color(0, 0, 255); // blue colour
      kinect2 = new Kinect2(this);
      kinect2.initVideo();
      kinect2.initDevice();
      display = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
      background(0);

      // setup the vectors
      loc = new PVector(0, 0);
    }

    void draw() {
      display = kinect2.getVideoImage();
      // draw the full camera image, scaled down to fit the window
      image(display, 0, 0, width, height);

      display.loadPixels();

      // worldRecord and
      // XY coordinate of closest color
      float worldRecord = 750;
      int closestX = 0;
      int closestY = 0;

      // Begin loop to walk through every pixel of the video image.
      // Use the image's own dimensions: the color camera is larger than
      // the depth camera, so depthWidth/depthHeight index the wrong pixels here.
      for (int x = 0; x < display.width; x ++ ) {
        for (int y = 0; y < display.height; y ++ ) {
          int loc = x + y*display.width;
          // What is current color
          color currentColor = display.pixels[loc];
          float r1 = red(currentColor);
          float g1 = green(currentColor);
          float b1 = blue(currentColor);
          float r2 = red(trackColor);
          float g2 = green(trackColor);
          float b2 = blue(trackColor);

          // Using euclidean distance to compare colors
          float d = dist(r1, g1, b1, r2, g2, b2); // We are using the dist( ) function to compare the current color with the color we are tracking.

          // If current color is more similar to tracked color than
          // closest color, save current location and current difference
          if (d < worldRecord) {
            worldRecord = d;
            closestX = x;
            closestY = y;
          }
        }
      }

      // We only consider the color found if its color distance is less than 10.
      // This threshold of 10 is arbitrary and you can adjust this number depending on how accurate you require the tracking to be.
      if (worldRecord < 10) {
        // Draw a circle at the tracked pixel, converting from image
        // coordinates back to (smaller) window coordinates
        fill(trackColor);
        strokeWeight(4.0);
        stroke(0);
        ellipse(closestX * width / display.width, closestY * height / display.height, 16, 16);
      }
    }

    void mousePressed() {
      // Map the mouse position (window coordinates) into the video image
      // (image coordinates) before sampling the clicked color
      int x = mouseX * display.width / width;
      int y = mouseY * display.height / height;
      int loc = x + y * display.width;
      println(loc);

      trackColor = display.pixels[loc];
    }

How can I take a specific segment of the rawDepthData in order to make a live mask?


Hi there. I am trying to do live subtraction while filming someone lying on the floor. The Kinect is facing top-down, 2.5-3.5 metres high, and I want to create a mask of the person moving. I am sending two images to Isadora for masking, but it would be great to know if I can do the mask in Processing and send one masked image with Spout to Resolume Arena for the final compositing. I'm trying to make this as simple as possible, but I have limited experience with code.

I have Windows 8.1, Processing 3.0 with Lengeling's KinectPV2 library, Resolume Arena 5, and a Kinect v2.0.

Below is the code I'm trying. I don't know how to scan a certain part of the rawDepthData; I tried an array, but this is not working.

import spout.*;
import KinectPV2.*;

PImage img;
PGraphics canvas;
PGraphics canvas2;
KinectPV2 kinect;
Spout spout;
Spout spout2;
int thresholdH = 1200; // max distance (in mm)
int thresholdL = 0;    // min distance (in mm)

boolean foundUsers = false;

void setup() {
  size(640, 360, P3D);
  textureMode(NORMAL);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.enableBodyTrackImg(true);
  kinect.enableColorImg(true);
  kinect.init();

  canvas = createGraphics(1280, 720, P3D);
  canvas2 = createGraphics(1280, 720, P3D);
  img = loadImage("SpoutLogoMarble3.bmp");
  spout = new Spout(this);
  spout2 = new Spout(this);
  spout.createSender("rgb image");
  spout2.createSender("mask");

}

void draw() {

  background(0, 90, 100);
  noStroke();
  canvas.beginDraw();
  canvas.image(kinect.getColorImage(), 0, 0, 1920, 1080);
  canvas.endDraw();

  spout.sendTexture(canvas);

  int[] rawData = kinect.getRawDepthData(); // read kinect depth
  canvas2.beginDraw();
  canvas2.loadPixels();
  // draw the depth image in white between high and low limits
  for (int x = 0; x < kinect.depthWidth; x++) {
    for (int y = 0; y < kinect.depthHeight; y++) {
      int offset = x + y * kinect.depthWidth;
      int rawDepth = rawData[offset];
      int pix = x + y * kinect.depthWidth; // index by the depth image width
      if (rawDepth > thresholdL && rawDepth < thresholdH) {
        canvas2.pixels[pix] = color(255, 255, 255, 255); // white inside the limits
      } else {
        canvas2.pixels[pix] = color(0, 0, 0, 0);         // transparent black outside
      }
    }
  }
  canvas2.updatePixels();
  canvas2.endDraw();
  spout2.sendTexture(canvas2);

}
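
On the actual question - scanning only a certain part of the raw data - one option is to restrict the x/y loops to a rectangular region of interest instead of walking the whole depth frame. A minimal sketch, where the roi* bounds are made-up values to adjust to the patch of floor under the camera:

// Hypothetical region of interest, in depth-image pixel coordinates
int roiX = 100, roiY = 80;  // top-left corner of the region
int roiW = 300, roiH = 250; // region size

// Inside draw(), scan only the region instead of the full 512x424 frame:
for (int x = roiX; x < roiX + roiW; x++) {
  for (int y = roiY; y < roiY + roiH; y++) {
    int offset = x + y * kinect.depthWidth;
    int rawDepth = rawData[offset];
    // ... same threshold test as above, writing into canvas2.pixels ...
  }
}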

Exported app not running when using OpenKinect library


I'm trying to export an app on OS X with Processing 3.1.1, using the OpenKinect library.

It exports OK, but the app just stalls when I try to run it. I can't see any error messages anywhere either.

Does Model 1520 work on Mac?


Has anyone gotten Model 1520 (Kinect 2) to work on Mac? I'm running Processing 3 and have installed the Open Kinect library. Attempting to run examples as per Dan Shiffman's videos, but I get "No Device Connected! Cannot Find Devices" errors. I used a Kinect 1 some years ago with Processing on an older computer without trouble, but not having any luck so far with this one.

2013 MacBook Pro, OSX 10.9.5

Thanks!

How to capture a frame and find contours of it?


Hi everybody, I don't know if this is the wrong section; anyway, I have this find-contours code in which I contour a specific image from my files, but now I need to change it: I would like to capture a frame with the webcam and then find the contours of it. How can I do it? Thanks to everybody who will help!

here's the code I have now

import gab.opencv.*;

PImage src;
OpenCV opencv;
ArrayList<Contour> contours;
int pointsTot;
int pointsCurr;
float countourApproximation;



////////////////////////////////////////////////////////////////////////////////

void setup()
{
  size( 1080, 720, P2D );

  colorMode( HSB, 360, 100, 100 );

  opencv = new OpenCV( this, 1080, 720 );

  src = loadImage("room.jpg");
  countourApproximation = 2;
  resetContours();
}


////////////////////////////////////////////////////////////////////////////////

void resetContours()
{
  opencv.loadImage( src );

  opencv.gray();
  opencv.blur(5);
  opencv.threshold(60);

  // all available operations are listed and described at:
  // http://atduskgreg.github.io/opencv-processing/reference/gab/opencv/OpenCV.html
  // [see "Method Summary"]

  contours = opencv.findContours();

  pointsTot = 0;
  for (Contour contour : contours) {
    contour.setPolygonApproximationFactor(countourApproximation);
    pointsTot += contour.getPolygonApproximation().getPoints().size();
  }
  //  pointsCurr = pointsTot-1;
  pointsCurr = 1;
}


////////////////////////////////////////////////////////////////////////////////

void draw()
{
  //  background( 0 );

  if (pointsCurr < pointsTot-1) {
    noTint();
  } else {
    tint( 60 );
  }
  image( src, 0, 0 );

  noFill();
  strokeWeight(3);

  int pointsCount = 0;                                  // number of points (of the segments) displayed so far
  for (int c=0; c<contours.size(); ++c) {

    Contour contour = contours.get( c );
    ArrayList<PVector> points = contour.getPolygonApproximation().getPoints();

    float h = map( c, 0, contours.size(), 0, 360 );

    beginShape();

    for (int p=0; p<points.size(); ++p) {

      if (pointsCount < pointsCurr) {

        PVector point = points.get( p );

        float s = map( p, 0, points.size(), 20, 100 );

        stroke( h, s, 100 );
        vertex( point.x, point.y );
        //        curveVertex( point.x, point.y );

        ++pointsCount;
      } else {
        break;
      }
    }

    endShape();
  }

  if (pointsCurr < pointsTot-1) {
    ++pointsCurr;
  }
}


////////////////////////////////////////////////////////////////////////////////

void mousePressed()
{

  countourApproximation = exp( random(3.5) );

  resetContours();
}
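
A minimal way to swap the static image for a live frame, assuming the processing.video Capture class alongside the same gab.opencv pipeline above: start the camera, and on a key press copy the current frame into src and re-run resetContours(). The size is assumed to match the OpenCV buffer (1080x720 here) and may need adapting to what the camera actually delivers:

import processing.video.*;

Capture video;

// in setup(), after creating opencv:
//   video = new Capture(this, 1080, 720);
//   video.start();

// capture and contour the current webcam frame on any key press
void keyPressed() {
  if (video.available()) {
    video.read(); // grab the newest frame (otherwise src keeps its old content)
  }
  src = video.get(); // copy the frame into the sketch's src image
  resetContours();   // reuse the existing contour pipeline unchanged
}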

PBox 2D


Hello! I need the PBox2D library to work with the Kinect. Nothing I have tried has worked: I install it, but the sketch always says the library in question is missing. Can someone help me?
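
One thing worth checking, offered as a guess: the library was renamed from PBox2D to "Box2D for Processing" in later releases, and the import and main class changed with it, so a sketch written for one will report a missing library when only the other is installed. The two styles look roughly like this:

// Older PBox2D style:
//   import pbox2d.*;
//   PBox2D box2d = new PBox2D(this);

// Newer "Box2D for Processing" style (installable from the Contribution Manager):
import shiffman.box2d.*;

Box2DProcessing box2d;

void setup() {
  size(640, 360);
  box2d = new Box2DProcessing(this); // create the helper
  box2d.createWorld();               // initialize the physics world
}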
