Quantcast
Channel: Kinect - Processing 2.x and 3.x Forum
Viewing all 530 articles
Browse latest View live

Blob find contours

$
0
0

Hello everyone, I'm using this code:

https://github.com/atduskgreg/opencv-processing/blob/master/examples/FindContours/FindContours.pde

I wish by pressing the button B the contour which found, will reset and becomes ready for a new image.

import g4p_controls.*;
import java.awt.Font;
import gab.opencv.*;

PImage src, dst;
OpenCV opencv;

ArrayList<Contour> contours;
ArrayList<Contour> polygons;

GButton btnFilterA;
boolean filterA=false;

GButton btnFilterB;
boolean filterB=false;

void setup() {
  src = loadImage("1.jpg");
  size(1024, 768);
  opencv = new OpenCV(this, src);

  btnFilterA = new GButton(this, 600, 20, 140, 20);
  btnFilterA.setText("Go");
  btnFilterA.setLocalColorScheme(GCScheme.GREEN_SCHEME);

  btnFilterB = new GButton(this, 750, 20, 140, 20);
  btnFilterB.setText("Stop");
  btnFilterB.setLocalColorScheme(GCScheme.GREEN_SCHEME);

  //opencv.gray();
  //opencv.threshold(70);
  //dst = opencv.getOutput();

  contours = opencv.findContours();
  println("found " + contours.size() + " contours");
}

void draw() {
  //scale(0.5);
  image(src, 0, 0);
  //image(dst, src.width, 0);

  noFill();
  strokeWeight(3);

  for (Contour contour : contours) {

    beginShape();
    for (PVector point : contour.getPolygonApproximation().getPoints()) {
    if (filterA==true){
    stroke(0, 255, 0);
    contour.draw();
    stroke(255, 0, 0);
    vertex(point.x, point.y);

    }
    endShape();
  }
}
}

public void handleButtonEvents(GButton button, GEvent event) {

  if (button == btnFilterA) {
     filterA=true;
  }
    if (button == btnFilterB) {
     contour.clear();

  }
  }

I tried with my code but even if off the contour remains in "memory" of the contour, how I can fix?

thanks


Real time typography with kinect or webcam

$
0
0

Hey everyone. I'm really new to processing so excuse me if this question sounds vague. Basically - I am trying to create a real-time typographic piece similar to the one below using either a Kinect camera or a webcam. I just don't know where to start. The other example feels similar to Shiffman's pointcloud example. I have attempted to implement text into his example but unfortunately I don't really know what I'm doing!

Could someone please help me out!

Thank you

image

image

brightness and contrast of the webcam

$
0
0

Hello everybody, I want that while there is a live webcam it is possible to adjust the brightness and contrast of color and gray images. This is my code:

import processing.video.*;
import g4p_controls.*;
import java.awt.Font;
import gab.opencv.*;


OpenCV opencv;

PFont f;

boolean filterT=false;


Capture cam;

GButton btnFilterT;

int n = 1;

void setup() {
  size(1024, 768);

  f = createFont("Arial", 48, true);

  btnFilterT = new GButton(this, 740, 50, 140, 20);
  btnFilterT.setText("On");
  btnFilterT.setLocalColorScheme(GCScheme.GREEN_SCHEME);

  cam = new Capture(this, 640, 480);
  cam.start();
}


void draw() {

  if (cam.available() == true) {
    cam.read();
  }


  pushMatrix();
  scale(-1, 1);
  image(cam.get(), -width, 0, width, height);
  popMatrix();

  opencv = new OpenCV(this, cam);
  opencv.loadImage(cam);


  if (filterT==true) {
    opencv.brightness((int)map(mouseX, 0, width, -255, 255));
    image(opencv.getOutput(),0, 0, width, height);
  }
}

public void handleButtonEvents(GButton button, GEvent event) {

  if (button == btnFilterT) {
    filterT=true;
  }
}

I have two problems:

if I press the T button to start the brightness and contrast in gray, the screen flips over and becomes no mirror.

how can I have brightness and contrast with the mirror video?

how can I have the brightness and contrast of the webcam color?

thank you

Processing tells me it can't find any Kinect.

$
0
0

Hi all,

Im using Kinect model 1517 and Processing 3.2.1. I recently try to run Shiffman's "OpenKinect-for-processing" examples with kinect connected, but then Processing tells me it can't find any Kinect..

111

Does anyone know why is this happening and how can I fix this?

Thanks in advances!

Kinect Depth Threshold

$
0
0

Hi all I am working on a project using Processing, Kinect and OpenNI.

I want to create a Kinect Depth Image to create a distance threshold. Ie. Pixels within 1m are white, any beyond that are black. I am new to Processing and I am struggling a bit to make the code work.

I found out one of the previous post asked similar problems (https://forum.processing.org/one/topic/kinect-depth-thresholdi.html) but the code wasn't working when I run it/modify it.

Does anyone has experience on this and can give me a hand? Thanks a lot!

How to make a persons body interact with a projection?

$
0
0

Hi, I am a freshman at VCUarts and new to Processing. I was wondering if anyone could help me figure out how to make a person be able to interact with a projection and make the images on the screen move? I was planning on using a projected gradient that would move with the motion of people in front of the gradient. I am not sure where to start and was hoping someone could help guide me in the right direction.

interactive screen using live video, changing letters positions

$
0
0

Hello, I'm new to programming with Processing. Trying to make a code for my art bachelor degree. What I want to do: make the interactive screen, filled with letters(text), with a live cam recording what's happening in front, detecting motion. When there is a motion letter that meets motion line(human body contours) flips position with a letter from the side with motion come from. for that as a base, I used raindrops sketch and Shiffman's Example 16-13: Simple motion detection. But now I'm getting a gray screen and letters appear non-stop. Did I mess up something with arrays? should text be put not in a string? I read about Kinect but not sure if it would help at this point. Hope to get hints or something that to do next :]

    import processing.video.*;
    import java.awt.Frame;
    import java.awt.Image;
    import java.text.*;

    Capture cam;
    Letter[][] drops;
    int dropsLength;
    PImage prevFrame;
    int sWidth = 1280;
    int sHeight = 720;

    String inputString = "Įsivaizduokime pasaulį, kur visi viską žino tiksliai, ir niekada neklysta. Niekam nekiltų abejoių, koks bus rytoj oras, kaip išsaugoti tirpstačius ledynus, ar koks visatos dydis. Žvelgiant į krentantį kamuoliuką kiekvienas galėtų pasakyti: -  O šito kamuoliuko kritimo greitis 6,325 m/s. -  Tikrai taip - atsakytų kitas. Ir viskas, daugiau nebebūtų jokių diskusijų, ieškojimų, matavimų. Su absoliučiu žinojimu gyvenimas taptų nebeįdomus, monotoniškas, tokiu atveju net progresas neįmanomas. Kai pradedu taip galvoti, džiaugiuosi nežinojimu, diskusijų galimybe, tiesos ieškojimu. Klaida suvokiama kaip neišvengiamas procesas teisybės ieškojime leidžia drąsiai žengti į praktikos sritį, nebijoti suklysti, o neteisingus procesus paversti progresu, žingsneliu link tikslo. Menininkas nebėra tas genijus, kuris turi sukurti kažką naujo, tarsi nežemiško, keičiančio visą mūsų suvokimą. Jo praktikos esmė eksperimentuoti ir klysti, organizuoti jau esamomis reikšmėmis ir kurti nau";
    char[] inputLetters;
    int dupStrings = 3;   //times to dublicate text
    int k = 800;
    float threshold = 50;

    void settings() {
      size(1280, 720);
    }
    void setup() {
      String[] cameras = Capture.list();

      drops = new Letter[dupStrings][inputString.length()];  //[inputString.length()];

      int wspace = 50;
      inputLetters = new char[inputString.length()];
      splitString();


      // first row hight
      int addLineHeight = 30;
      for (int i = 0; i < dupStrings; i++) {
            for (int j = 0; j < inputLetters.length; j++) {
                 if  (inputLetters[j] < k + wspace ){
                   Letter testLetter = new Letter(inputLetters[j]);
                   testLetter.x = wspace;
                   testLetter.y = addLineHeight;
                   drops[i][j] = testLetter;
                   wspace += 10;   //spaces between letters

                   // new row
                   if (wspace >= sWidth) {
                                wspace = 10;
                               addLineHeight += 40; } // space between

                    }
                           else {
                               addLineHeight += 50;
                                }
                                                                  }
                                               }
    // cam conect//

      if (cameras.length == 0) {
        println("There are no cameras available...");
        size(400, 400);
        exit();
      }
      else {
        cam = new Capture(this, sWidth, sHeight);
        cam.start();
        cam.loadPixels();
        prevFrame = createImage(cam.width, cam.height, RGB);

        size(sWidth, sHeight);
      }
      dropsLength = inputString.length();
    }

    void captureEvent(Capture cam) {
      // Save previous frame for motion detection!!
      prevFrame.copy(cam, 0, 0, cam.width, cam.height, 0, 0, cam.width, cam.height);
      // Before we read the new frame, we always save the previous frame for comparison!```
      prevFrame.updatePixels();  // Read image from the camera
      cam.read();
    }


    void splitString() {

      for (int i = 0; i < inputString.length() ; i++) {
        inputLetters[i] = inputString.charAt(i);
      }
    }

    void draw() {

        loadPixels();
      cam.loadPixels();
      prevFrame.loadPixels();
        // Begin loop to walk through every pixel
      for (int x = 0; x < cam.width; x ++ ) {
        for (int y = 0; y < cam.height; y ++ ) {

          int loc = x + y*cam.width;            // Step 1, what is the 1D pixel location
          color current = cam.pixels[loc];      // Step 2, what is the current color
          color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

          // Step 4, compare colors (previous vs. current)
          float r1 = red(current);
          float g1 = green(current);
          float b1 = blue(current);
          float r2 = red(previous);
          float g2 = green(previous);
          float b2 = blue(previous);
          float diff = dist(r1, g1, b1, r2, g2, b2);

          // Step 5, How different are the colors?
          // If the color at that pixel has changed, then there is motion at that pixel.
          if (diff > threshold) {
            // If motion, display black
            pixels[loc] = color(0);
          } else {
            // If not, display white
            pixels[loc] = color(255);
          }
        }
      }

       //Responding to the brightness/color of the screen
       for (int i = 0; i < dupStrings; i++) {
        for (int j = 0; j < dropsLength; j++) {

          if (drops[i][j].y < sHeight && drops[i][j].y > 0) {
           int loc = drops[i][j].x + ((drops[i][j].y)-1)*sWidth;
           float bright = brightness(cam.pixels[loc]);
            if (bright > threshold) {
              drops[i][j].dropLetter();
              drops[i][j].upSpeed = 1;
            }
            else {
              if (drops[i][j].y > threshold) {
                int aboveLoc = loc = drops[i][j].x + ((drops[i][j].y)-1)*sWidth;
                float aboveBright = brightness(cam.pixels[aboveLoc]);
                if (aboveBright < threshold) {
                  drops[i][j].liftLetter();
                  drops[i][j].upSpeed = drops[i][j].upSpeed * 5;
                }
              }
            }
          }
          else {
            drops[i][j].dropLetter();
          }

          drops[i][j].drawLetter();
          cam.updatePixels();
        }
      }
    }

    class Letter {
      int x;
      int y;
      int m;
      char textLetter;
      int upSpeed;
      int alpha = 150;
      Letter(char inputText) {
        x = 100;
        y = 100;
        textLetter = inputText;
        textSize(16);
        upSpeed = 1;
      }
      void drawLetter() {
       // if ( m < 1) {
        fill(150, 150, 150 , alpha);
        text(textLetter, x, y);

      }

      void letterFade() {
        alpha -= 5;
        if(alpha <= 0) {
          y = int(random(-350, 0));
          alpha = 255;
        }
      }


      void dropLetter() {
      //  y++;
        if (y > 730) {
          letterFade();
        }
      }

      void liftLetter() {
        int newY = y - upSpeed;
        if (newY >= 0) {
          y = newY;
        }
        else {
          y = 0;
        }
      }
    }

Is there a way to make a Kinect do motion tracking with dancers with particle delay?

$
0
0

For a school project my theater and dance classes want me to create a dance projection wall. I have been able to find essentially no information regarding the subject and was hoping you all on the forums might be able to help. If there is any guidance you can give it would be helpful.


How can I make kinect to recognize the qr code I made?

$
0
0

Are there any ways that I can teach kinect to recognize the qr code I made and trigger some activities I set in Unity?

Kinect V2 as a 3d scanner

Does the Xbox One S power supply work with the Xbox 360 kinect?

$
0
0

I have the xbox 360 kinect, but I can't get a power supply for it. The power supply cable for Xbox One S is available though. Does anyone know if it would work?

The constructor "processing.Kinect2(RGBDepthTest2)" does not exist error

$
0
0

I am using the existing examples of KInect2 from OpenKinect_Processing library and trying to integrate it with my code.

// Daniel Shiffman
// All features test

// https://github.com/shiffman/OpenKinect-for-Processing
// http://shiffman.net/p5/kinect/

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;

class RGBDepthTest2{
  Kinect2 kinect2;

  RGBDepthTest2()
  {
  }

  void setup() {
    size(1024, 848, P2D);

    kinect2 = new Kinect2(this);
    kinect2.initVideo();
    kinect2.initDepth();
    kinect2.initIR();
    kinect2.initRegistered();
    // Start all data
    kinect2.initDevice();
  }


  void draw() {
    background(0);
    image(kinect2.getVideoImage(), 0, 0, kinect2.colorWidth*0.267, kinect2.colorHeight*0.267);
    image(kinect2.getDepthImage(), kinect2.depthWidth, 0);
    image(kinect2.getIrImage(), 0, kinect2.depthHeight);

    image(kinect2.getRegisteredImage(), kinect2.depthWidth, kinect2.depthHeight);
    fill(255);
    text("Framerate: " + (int)(frameRate), 10, 515);
  }
}

I am trying to call this class from the main class by instating it, however I keeping on getting this error. The constructor "processing.Kinect2(RGBDepthTest2)" does not exist.

Please suggest a solution for the same.

Thanks in advance :)

UserCounter for all tracked users (simpleopenni)

$
0
0

Hi everyone! I'm complete beginner to processing, kinect and simpleopenni. I'm sorry for asking stupid question, but please help me :(

I tried to create a usercounter for many times for my program, since I'd like to base on the amount of the tracked user to switch my sketch between two function, yet it did't work at all. ( e.g usercount is 1-20 = function A; usercount is 21-29=switch to function B; usercount is 30-50= function A; ...)

if I count it by getting the number of userID or using kinect.getNumberOfUsers() , it only gets the current detecting user, and the count will go back to 0, or even -1 when the users is out of the scene map.

I'd like to have a user countor accumulating all the users instead of the current ones, what should i do?

I'm using simpleopenni 1.96, processing 2.1 and window sdk 1.8. (OS: window 7) Thank you so much!

OpenNI2 Devices and Processing

$
0
0

Hey guys, I have been trying unsuccessfully playing with the intel real sense and Processing, in the hopes of getting it to behave in a similar fashion to Kinect. I cannot access the data, though I can initialise the device, and Intel is withdrawing support for Processing and Java development it seems.

This leaves me in a bind. I need a depth sensor that is able to be small and mobile, without the need for external power. I know there are a few OpenNI2 compatible devices on the market from Asus (Xtion2) and Structure (Occipital).

Are these based on the same frameworks as Kinect? And are they potentially compatible with Processing?

Any insights greatly appreciated.

Best regards, Matt

URGENT! PLEASE HELP!!!

$
0
0

I'm presenting my graduation project on Wednesday. Everything has been working properly but today my Windows 10 started updating and after that I haven't been able to start the sketch: "Target VM failed to initialize". I'm using Kinect V2 with Open Kinect (basic kinect code from Shiffman maxthreshold). I assume the problem has something to do with libraries since another sketch error said that libraries are missing. However I have installed all libraries and nothing was wrong before the update. Please help, I'm panicking a bit since I need to present my work the day after tomorrow! I'm not enclosing the code since it's really long and there should be any problems with the code. Thank you so much!!


OpenCV Facial Recognition

$
0
0

Looking for any info about doing Facial Recognition in Processing (not to be confused with Facial Detection).

Greg Borenstein's lib here gives basic access to OpenCV (and is great BTW!):

https://github.com/atduskgreg/opencv-processing

It's mentioned in the libs introduction that " In addition to using the wrapped functionality, you can import OpenCV modules and use any of its documented functions".

OpenCV does have a FaceRecognition module, here:

http://docs.opencv.org/3.0-beta/modules/face/doc/facerec/facerec_api.html#FaceRecognizer

If anyone has gone down this path, or knows a general direction to travel, the help would be greatly appreciated.

Multi-Threading Face Detection with OpenCV

$
0
0

Hi All,

As the title suggests I'm trying to do face detection in real time which is too slow when executed directly in draw. Therefore I'd like to implement it within a Runnable class but I'm not having much luck. Can someone point me in the right direction?

Error:

You are trying to draw outside OpenGL's animation thread.
Place all drawing commands in the draw() function, or inside
your own functions as long as they are called from draw(),
but not in event handling functions such as keyPressed()
or mousePressed().
OpenGL error 1282 at top endDraw(): invalid operation
OpenGL error 1282 at bot endDraw(): invalid operation

Code:

import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture video;

OpenCV opencv;
Rectangle[] faces;
boolean detected = false;

FaceThread getFaces;

void setup() {
  size(640, 480);

  video = new Capture(this, width, height);

  faces = new Rectangle[0];

  getFaces = new FaceThread();
  getFaces.start();

  video.start();
}

void draw() {
  background(0);
  PImage img = video;
  img.filter(GRAY);
  image(img, 0, 0, 768, 432);

  if (faces.length > 0) {
    for (int i = 0; i < faces.length; i++) {
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
  }

  if (millis() % 500 == 0) {
    println("starting thread");
    getFaces.start();
  }
}

void mousePressed() {
  println(frameRate);
}

PImage getCanvas(){
 return get();
}

class FaceThread implements Runnable {

  Thread thread;

  public FaceThread() {
  }

  public void start() {
    thread = new Thread(this);
    thread.start();
  }

  public void run() {
    println("New Thread");
    opencv.loadImage(getCanvas());
    faces = opencv.detect();
    println(faces.length);
  }

  public void stop() {
    thread = null;
  }

  public void dispose() {
    stop();
  }
}

What am I missing? java.lang.UnsatisfiedLinkError

$
0
0

I see reference to this error in several places but I cant seem to find a thread anywhere that actually states the fix! any help would be wonderful for a sad n00b. Mac Pro desktop OS x 10.11.6 Processing 3.3 Kinect V2

java.lang.UnsatisfiedLinkError: no Kinect20.Face in java.library.path at processing.opengl.PSurfaceJOGL$2.run(PSurfaceJOGL.java:480) at java.lang.Thread.run(Thread.java:745) A library relies on native code that's not available. Or only works properly when the sketch is run as a 32-bit application.

How to solve Vertices of chain shape are too close together and get an correct shape back?

$
0
0

I am trying to use the users silhouette as mask to show some animation in that mask. I came up to the http://www.creativeapplications.net/processing/kinect-physics-tutorial-for-processing/

I am using Processing and Kinect V1.

In the first place, the code doesn't work anymore that I download straight of the website, but after debugging some days I get it sort of working.

The problem seems to be that the vertex points of the silhouette are not saving correct, because I only see triangle shapes when I stand for the Kinect camera. Also I receive the error: "Vertices of chain shape are too close together".

Master .pde: http://snippi.com/s/25rcvbd Class CustomShape .pde: http://snippi.com/s/2omdfyu Class PolygonBlob .pde: http://snippi.com/s/7zfdqeq

I think the error will come from this part inside the PolygonBlob Class (see notice at PolygonBlob script at Snippi):

   for (int n=0; n<theBlobDetection.getBlobNb (); n++) {
          Blob b = theBlobDetection.getBlob(n);

          if (b != null && b.getEdgeNb() > 100) {
            ArrayList<PVector> contour = new ArrayList<PVector>();
            for (int m=0; m<b.getEdgeNb (); m++) {
              EdgeVertex eA = b.getEdgeVertexA(m);
              EdgeVertex eB = b.getEdgeVertexB(m);
              if (eA != null && eB != null) {

                EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
                EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));

                float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
                float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);

                if (dn > 15 || dp > 15) {
                  if (contour.size() > 0) {
                    contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
                    contours.add(contour);
                    contour = new ArrayList<PVector>();
                  } else {
                    contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
                  }
                } else {
                  // comment the next line
                  //contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
                }
              }
            }
          }
}

Hope someone can help me out and help me fix the issue?

Skeleton Tracking, Win10, Kinect v1, Processing 2.2.1, SimpleOpenNI

$
0
0

Hi there!

I really have a big problem with skeleton tracking using SimpleOpenNI. I’m working on Windows 10 with Kinect v1 and SimpleOpenNI. Since two days I’m searching solution in internet, but there is no skeleton tracking code that will work in Processing 2.2.1. There is no errors, just plain white or black box instead of program… I'm using codes from 1. the book ,,Making this see" by Greg Borenstein, 2. trying examples from http://interactivemechanics.com/news/2015/10/kinect-with-processing/ (but I think it's for v2, not sure), 3. reading what to with the problem in this site http://urbanhonking.com/ideasfordozens/2011/02/16/skeleton-tracking-with-kinect-and-processing/ 4. or that one https://fivedots.coe.psu.ac.th/~ad/kinect/installation.html

I'm pretty sure, that I have installed SimpleOpenNI properly, with NITE and SensorKinect. Do you have any advice what is wrong? I would be very grateful for any advice.

Viewing all 530 articles
Browse latest View live