Resizing the Kinect video image for openCV
I'm trying to use face detection with the image from a Kinect v1. The 640x480 video input is causing my framerate to drop, so I'd like to scale down the resolution of the video input. If I write the video input from the Kinect to a PImage, resize() it, and then feed that to OpenCV, it simply crops the video image; it also causes a weird ghosting effect. What other ways are there to scale down the video image from a Kinect?
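One approach that tends to avoid both the cropping and the ghosting is to leave the Kinect's buffer untouched and copy() each frame into a separate, smaller PImage (copy() rescales on the fly), with OpenCV initialized at the reduced resolution. A minimal sketch of the idea, assuming the Open Kinect for Processing v1 API and OpenCV for Processing:

import org.openkinect.processing.*;
import gab.opencv.*;

Kinect kinect;
OpenCV opencv;
PImage small;  // downscaled copy fed to OpenCV

void setup() {
  size(320, 240);
  kinect = new Kinect(this);
  kinect.initVideo();
  opencv = new OpenCV(this, 320, 240);  // match the reduced resolution
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  small = createImage(320, 240, RGB);
}

void draw() {
  // copy() rescales while writing into a separate buffer,
  // so the Kinect's own image is never modified
  small.copy(kinect.getVideoImage(), 0, 0, 640, 480, 0, 0, 320, 240);
  opencv.loadImage(small);
  image(small, 0, 0);
  noFill();
  stroke(0, 255, 0);
  for (java.awt.Rectangle r : opencv.detect()) {
    rect(r.x, r.y, r.width, r.height);
  }
}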
how to make an image on screen appear smaller or bigger with depth info
Hello everyone, I am extremely new to this. I am trying to control an image's size on screen by moving closer to or farther from the camera. Can anyone point me in the right direction? I am using a Kinect v1, Processing 3, and the available kinect4WinSDK library
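I can't speak for kinect4WinSDK specifically, but the general pattern is: sample the depth at a point, then map that distance to a scale factor for image(). A rough sketch using the Open Kinect for Processing v1 API as a stand-in (photo.jpg is a hypothetical file):

import org.openkinect.processing.*;

Kinect kinect;
PImage pic;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  pic = loadImage("photo.jpg");  // hypothetical image file
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  // sample the raw depth value in the middle of the frame
  int d = depth[320 + 240 * 640];
  // nearer (small raw value) -> bigger picture, farther -> smaller
  float s = map(constrain(d, 400, 1500), 400, 1500, 2.0, 0.3);
  imageMode(CENTER);
  image(pic, width/2, height/2, pic.width * s, pic.height * s);
}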
Registered Image from Kinect v2
Hey everyone,
I have an issue regarding Open Kinect for Processing. The current project we are building consists of background removal using a Kinect. For this purpose we use the RGB information for color subtraction as well as the depth information. Due to registration issues between the RGB and depth images provided by the Kinect, we need to work with the registered image it provides (i.e. kinect2.getRegisteredImage().get()). That's where we get stuck...
After 2 days of searching to understand how the registration information is stored in the PImage returned by the Kinect, we are still stuck. We were expecting the registered image to use the RGB channels for color and the alpha channel for the depth values, but our tests show that is not the case (the alpha channel is always either 255 or 0). We couldn't find any documentation on the matter online...
My question is: do you have any information on how we could extract RGB and depth values from the registered PImage? Currently we can only extract RGB values from it...
Thanks a lot for your feedback!
Laurent
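For reference, in Open Kinect for Processing the depth never rides along in the alpha channel; getRegisteredImage() carries only RGB. The point of the registered image is that it is aligned to the depth frame, so the same pixel index can be used in both getRegisteredImage().pixels and getRawDepth(). A sketch of that pairing (my reading of the library, so treat it as an assumption to verify):

import org.openkinect.processing.*;

Kinect2 kinect2;

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initRegistered();
  kinect2.initDevice();
}

void draw() {
  PImage reg = kinect2.getRegisteredImage();  // RGB aligned to depth space
  int[] depth = kinect2.getRawDepth();        // depth in mm, same 512x424 grid
  reg.loadPixels();
  for (int i = 0; i < depth.length; i++) {
    // keep the color only where something is within 1.5 m
    if (depth[i] == 0 || depth[i] > 1500) {
      reg.pixels[i] = color(0);
    }
  }
  reg.updatePixels();
  image(reg, 0, 0);
}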
how to place rgb camera pixels onto depth camera array V1 Kinect
Processing 3, Kinect v1
Basically, I've stripped down my code so it's just this; at the moment it only shows the user when you're within the correct threshold. You show up pink, as I have made it that color. I want to know how to show the RGB pixels only for the bits that are pink.
Could I also just lay a black shape over everywhere the threshold is not met?
Many thanks for reading; any feedback is much appreciated.
import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
Kinect kinect;
int kinectWidth = 640;
int kinectHeight = 480;
PImage cam = createImage(640, 480, RGB);
int minThresh = 300;
int maxThresh = 700;
float reScale;
void setup() {
size(640, 480, P3D);
kinect = new Kinect(this);
kinect.enableMirror(true);
kinect.initDepth();
reScale = (float) width / kinectWidth;
}
void draw() {
cam.loadPixels();
int[] depth = kinect.getRawDepth();
for (int x = 0; x < kinect.width; x++) {
for (int y = 0; y < kinect.height; y++) {
int offset = x + y * kinect.width;
int d = depth[offset];
if (d > minThresh && d < maxThresh) {
cam.pixels[offset] = color(255,0,200);
} else{
cam.pixels[offset] = color(0);
}
}
}
cam.updatePixels();
background(255);
image(cam,0,0);
}
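One way to show the real colors instead of pink is to also enable the RGB camera and copy its pixels wherever the depth test passes; note the v1 color and depth cameras are physically offset, so without registration the alignment is only approximate. A sketch replacing the draw() above (it assumes kinect.initVideo() was added to setup()):

void draw() {
  background(255);
  PImage rgb = kinect.getVideoImage();  // requires kinect.initVideo() in setup()
  int[] depth = kinect.getRawDepth();
  cam.loadPixels();
  rgb.loadPixels();
  for (int i = 0; i < depth.length; i++) {
    if (depth[i] > minThresh && depth[i] < maxThresh) {
      cam.pixels[i] = rgb.pixels[i];  // RGB where the user is
    } else {
      cam.pixels[i] = color(0);       // black shape everywhere else
    }
  }
  cam.updatePixels();
  image(cam, 0, 0);
}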
In urgent need of assistance to help get me started with this idea for my final project
Although it is a little ambitious, I want to create an interactive installation for my final piece in two weeks' time. My idea is to pre-program algorithms in Processing that procedurally generate a landscape, displayed on a screen via a projector. To add interactivity I have obtained a Kinect, the Xbox sensor that can detect people's movement and record their position. I want the Kinect to track the speed of movement in the viewer's body, forming sharp, jagged terrain when the viewer moves quickly and smoother, less harsh gradients when the viewer moves subtly. The Kinect will also capture the direction of the head, letting the viewer navigate virtually through the landscape depending on where they turn their head.
I am a complete novice.
Could someone please help me get the code started, and if you have any useful advice on how to approach the code in Processing to enable all these functions, that would be much appreciated.
Thank you for your time,
Simple, Simple, Simple question (documentation)
It's driving me crazy: I am unable to find documentation for some libraries.
Please help me and show me the simple path and steps to find out:
What functions a library contains,
e.g. processing.kinect.*
I want to know what I can do with it and where I can find the right syntax.
Please excuse me. I know this is a 'stupid' question, but it has kept me busy for more than 3 days.
Thanx
Problem combining 2 codes...
I got these two and want to combine them, but really, everything is false... Can someone help me out? First:
// Click the mouse to memorize a current background image
import processing.video.*;
import gab.opencv.*;
// Variable for capture device
Capture video;
OpenCV opencv;
// Saved background
PImage backgroundImage;
// How different must a pixel be to be a foreground pixel
float threshold = 20;
void setup() {
String[] cameras = Capture.list();
//printArray(cameras);
size(1280, 720);
//video.width = 1280;
//video.height = 720;
video = new Capture(this, 1280, 720, cameras[76]);
video.start();
opencv = new OpenCV(this, 1280, 720);
opencv.startBackgroundSubtraction(5, 3, 0.5);
// Create an empty image the same size as the video
backgroundImage = createImage(video.width, video.height, RGB);
}
void captureEvent(Capture video) {
// Read image from the camera
video.read();
}
void draw() {
if (video.available() == true) {
video.read();
}
image(video, random(width),random(height));
opencv.loadImage(video);
opencv.updateBackground();
opencv.dilate();
opencv.erode();
noFill();
stroke(255, 0, 0);
strokeWeight(3);
for (Contour contour : opencv.findContours()) {
contour.draw();
}
}
void mousePressed() {
// Memorize the current frame as the background
backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
backgroundImage.updatePixels();
}
Second code to combine with the first one:
/**
 * Spatiotemporal
 * by David Muth
 *
 * Records a number of video frames into memory, then plays back the video
 * buffer by turning the time axis into the x-axis and vice versa
 */
import processing.video.*;
Capture video;
int signal = 0;
//the buffer for storing video frames
ArrayList frames;
//different program modes for recording and playback
int mode = 0;
int MODE_NEWBUFFER = 0;
int MODE_RECORDING = 1;
int MODE_PLAYBACK = 2;
int currentX = 0;
void setup() {
size(640, 480);
// This the default video input, see the GettingStartedCapture
// example if it creates an error
video = new Capture(this, width, height);
// Start capturing the images from the camera
video.start();
}
void captureEvent(Capture c) {
c.read();
//create a new buffer in case one is needed
if (mode == MODE_NEWBUFFER) {
frames = new ArrayList();
mode = MODE_RECORDING;
}
//record into the buffer until there are enough frames
if (mode == MODE_RECORDING) {
//copy the current video frame into an image, so it can be stored in the buffer
PImage img = createImage(width, height, RGB);
video.loadPixels();
arrayCopy(video.pixels, img.pixels);
frames.add(img);
//in case enough frames have been recorded, switch to playback mode
if (frames.size() >= width) {
mode = MODE_PLAYBACK;
}
}
}
void draw() {
loadPixels();
//code for the recording mode
if (mode == MODE_RECORDING) {
//set the image counter to 0
int currentImage = 0;
//begin a loop for displaying pixel columns
for (int x = 0; x < video.width; x++) {
//go through the frame buffer and pick an image using the image counter
if (currentImage < frames.size()) {
PImage img = (PImage)frames.get(currentImage);
//display a pixel column of the current image
if (img != null) {
img.loadPixels();
for (int y = 0; y < video.height; y++) {
pixels[x + y * width] = img.pixels[x + y * video.width];
}
}
currentImage++;
}
else {
break;
}
}
}
if (mode == MODE_PLAYBACK) {
for (int x = 0; x < video.width; x++) {
PImage img = (PImage)frames.get(x);
if (img != null) {
img.loadPixels();
for(int y = 0; y < video.height; y++) {
pixels[x + y * width] = img.pixels[currentX + y * video.width];
}
}
}
currentX++;
if(currentX >= video.width) {
mode = MODE_NEWBUFFER;
//reset the column counter
currentX = 0;
}
}
updatePixels();
}
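For what it's worth, the two sketches clash mainly because each wants its own captureEvent() and its own use of the frame. One hedged way to merge them is to run the background subtraction on every captured frame and store OpenCV's subtracted output in the slit-scan buffer, roughly like this (an untested sketch, not a drop-in fix):

import processing.video.*;
import gab.opencv.*;

Capture video;
OpenCV opencv;
ArrayList<PImage> frames = new ArrayList<PImage>();

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  opencv.startBackgroundSubtraction(5, 3, 0.5);
  video.start();
}

void captureEvent(Capture c) {
  c.read();
  // run the background subtraction on every new frame...
  opencv.loadImage(c);
  opencv.updateBackground();
  opencv.dilate();
  opencv.erode();
  // ...and store the subtracted frame in the slit-scan buffer
  if (frames.size() < width) {
    frames.add(opencv.getSnapshot());
  }
}

void draw() {
  // draw the newest subtracted frame plus its contours
  if (frames.size() > 0) {
    image(frames.get(frames.size() - 1), 0, 0);
  }
  noFill();
  stroke(255, 0, 0);
  for (Contour contour : opencv.findContours()) {
    contour.draw();
  }
}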
Choose a color and play a video for that color (Urgent)
Hello guys. I have 4 videos and 4 colors, and I want to assign one color to each video: when I show the color to the webcam, it plays the corresponding video, and when the video is over the webcam continues. Can you help me? This is the code I want to change; I really need help, final project next week :( PS: The colors will be green, yellow, red, and blue, and the videos will be named video1.mp4, video2.mp4, video3.mp4, video4.mp4
Code :
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;
Capture video;
OpenCV opencv;
PImage src;
ArrayList<Contour> contours;
// <1> Set the range of Hue values for our filter
//ArrayList<Integer> colors;
int maxColors = 4;
int[] hues;
int[] colors;
int rangeWidth = 10;
PImage[] outputs;
int colorToChange = -1;
void setup() {
video = new Capture(this, 640, 480);
opencv = new OpenCV(this, video.width, video.height);
contours = new ArrayList<Contour>();
size(640,480, P2D);
// Array for detection colors
colors = new int[maxColors];
hues = new int[maxColors];
outputs = new PImage[maxColors];
video.start();
}
void draw() {
background(150);
if (video.available()) {
video.read();
}
// <2> Load the new frame of our movie in to OpenCV
opencv.loadImage(video);
// Tell OpenCV to use color information
opencv.useColor();
src = opencv.getSnapshot();
// <3> Tell OpenCV to work in HSV color space.
opencv.useColor(HSB);
detectColors();
// Show images
image(src, 0, 0);
for (int i=0; i<outputs.length; i++) {
if (outputs[i] != null) {
image(outputs[i], width-src.width/4, i*src.height/4, src.width/4, src.height/4);
noStroke();
fill(colors[i]);
rect(src.width, i*src.height/4, 30, src.height/4);
}
}
// Print text if new color expected
textSize(20);
stroke(255);
fill(255);
if (colorToChange > -1) {
text("click to change color " + colorToChange, 10, 25);
} else {
text("press key [1-4] to select color", 10, 25);
}
displayContoursBoundingBoxes();
}
//////////////////////
// Detect Functions
//////////////////////
void detectColors() {
for (int i=0; i<hues.length; i++) {
if (hues[i] <= 0) continue;
opencv.loadImage(src);
opencv.useColor(HSB);
// <4> Copy the Hue channel of our image into
// the gray channel, which we process.
opencv.setGray(opencv.getH().clone());
int hueToDetect = hues[i];
//println("index " + i + " - hue to detect: " + hueToDetect);
// <5> Filter the image based on the range of
// hue values that match the object we want to track.
opencv.inRange(hueToDetect-rangeWidth/2, hueToDetect+rangeWidth/2);
//opencv.dilate();
opencv.erode();
// TO DO:
// Add here some image filtering to detect blobs better
// <6> Save the processed image for reference.
outputs[i] = opencv.getSnapshot();
}
// <7> Find contours in our range image.
// Passing 'true' sorts them by descending area.
if (outputs[0] != null) {
opencv.loadImage(outputs[0]);
contours = opencv.findContours(true,true);
}
}
void displayContoursBoundingBoxes() {
for (int i=0; i<contours.size(); i++) {
Contour contour = contours.get(i);
Rectangle r = contour.getBoundingBox();
if (r.width < 20 || r.height < 20)
continue;
stroke(255, 0, 0);
fill(255, 0, 0, 150);
strokeWeight(2);
rect(r.x, r.y, r.width, r.height);
}
}
//////////////////////
// Keyboard / Mouse
//////////////////////
void mousePressed() {
if (colorToChange > -1) {
color c = get(mouseX, mouseY);
println("r: " + red(c) + " g ...
** (java.exe:9372): WARNING **: gstvideo: failed to get caps of pad nat:sink
Hello there!
I have borrowed this code for a school project. It creates a bouncing ball which you can control with your webcam through movement. Sometimes it works just fine, but there seems to be a bug saying:
** (java.exe:9372): WARNING **: gstvideo: failed to get caps of pad nat:sink
And then the webcam won't stream. I stumbled upon a discussion of a similar problem that suggested using loadPixels() and updatePixels(), yet that did not remove the warning.
Below is the code
BackgroundBouncer.pde:
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;
Capture video;
OpenCV opencv;
Ball b;
void setup() {
size(640,480, P3D); //setup screen size and uses OpenGL gfx driver
frameRate(30);
ellipseMode(RADIUS); //set ellipse mode to radius (center point)
video = new Capture(this, width/2, height/2);
opencv = new OpenCV(this, width/2, height/2);
b = new Ball();
video.start();
video.read();
opencv.startBackgroundSubtraction(50,3,0.5); //detects moving objects
}
void draw() {
clear();
scale(2);
loadPixels();
opencv.loadImage(video); //
opencv.flip(OpenCV.HORIZONTAL);
opencv.updateBackground();
opencv.calculateOpticalFlow(); //apparent motion between two frames caused by moving object or camera
opencv.dilate(); //makes the image wider and look at neighbour pixels shape over which minimum is taken
opencv.erode(); //erodes image and look at neighbour pixels shape over which minimum is taken
noFill();
stroke(255, 0, 0);
strokeWeight(1);
image(opencv.getOutput(), 0, 0);
for (Contour c : opencv.findContours()) {
Contour hull = c.getConvexHull();
Rectangle box = hull.getBoundingBox();
b.strike(c, opencv);
}
b.move();
reflect(b);
drawBall(b);
updatePixels();
}
void keyPressed() {
b.position = new PVector(120, 120);
b.momentum = new PVector(0, 0);
}
void captureEvent(Capture c) {
c.read();
}
And the Ball.pde:
float decayRate = 0.9; //how much decay
float scalingFactor = 7; //how big a ball
PVector gravity = new PVector(0, 0.5); //how heavy
class Ball {
public Ball() {
size = 25;
shade = #FF0000; //red ball
position = new PVector(120, 120); //start position
momentum = new PVector(0, 0); //a moment of stillness
}
public PVector position;
public PVector momentum;
public float size;
public color shade;
public void move() {
position.add(momentum);
momentum.add(gravity);
momentum.mult(decayRate);
}
public void strike(Contour c, OpenCV opencv) {
for (PVector p : c.getPoints()) {
if (p.dist(position) <= size) {
Rectangle box = c.getBoundingBox();
PVector flow = opencv.getAverageFlowInRegion(box.x, box.y, box.width, box.height);
flow.mult(scalingFactor);
momentum.add(flow);
return;
}
}
}
}
void drawBall(Ball b) {
fill(b.shade);
ellipse(b.position.x, b.position.y,
b.size, b.size);
}
void reflect(Ball b) {
if (b.position.x - b.size <= 0) {
b.position.x = b.size;
b.momentum.x *= -1;
} else if (b.position.x + b.size > width/2) {
b.position.x = width/2 - b.size;
b.momentum.x *= -1;
}
if (b.position.y - b.size <= 0) {
b.position.y = b.size;
b.momentum.y *= -1;
} else if (b.position.y + b.size > height/2) {
b.position.y = height/2 - b.size;
b.momentum.y *= -1;
}
}
Any help or tips is welcomed! Thanks in advance.
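That gstvideo warning usually means GStreamer could not negotiate a format with the default camera, not a bug in the sketch itself. It sometimes disappears if you pick an explicit entry from Capture.list() whose resolution the camera actually supports; a snippet of that change:

void setup() {
  size(640, 480, P3D);
  String[] cameras = Capture.list();
  printArray(cameras);  // inspect the supported camera modes in the console
  // pick an entry whose size matches what the sketch expects,
  // e.g. "name=HD Webcam,size=320x240,fps=30"
  video = new Capture(this, width/2, height/2, cameras[0]);
  video.start();
}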
Face recognition saving img
Does somebody know a sketch or a way to save the bounding box of a face-detection program/software/sketch? I want to save the image inside the bounding box (the face). My Google-fu leaves me wandering around.
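With OpenCV for Processing this is a few lines: detect() returns java.awt.Rectangle boxes, get() cuts the region out of the frame, and save() writes it to the sketch folder. A minimal sketch of the idea, assuming cam is a running Capture and the face cascade is loaded:

void keyPressed() {
  opencv.loadImage(cam);
  java.awt.Rectangle[] faces = opencv.detect();
  for (int i = 0; i < faces.length; i++) {
    // cut the face region out of the current frame and write it to disk
    PImage face = cam.get(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    face.save("face-" + frameCount + "-" + i + ".png");
  }
}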
Kinect, Processing, OpenNI
Hi, please HELP! Can someone offer any tips on how to download the OpenNI software on the newest version of macOS? We want to make something beautiful with Processing and a Kinect. Love
blend() and filter() significantly slow down 1080p Video FPS to 7; Any solutions?
// Hey, I'm probably not doing this right.
// Is it because I'm mixing up the PGraphics and PImage contexts?
// I'M GETTING 7 FPS WITH AN i7 7700K AND 1070 SLI.
// CPU AND GPU ARE NOT BEING PUSHED.
// NEED HELP!!!!
import java.util.ArrayList;
import KinectPV2.KJoint;
import KinectPV2.*;
import processing.video.*;

KinectPV2 kinect;
Movie coral;
PGraphics feedback;
PImage kinectSil;

void setup() {
  fullScreen(P2D, 2);
  feedback = createGraphics(1920, 1080, P2D);
  background(0);
  //blendMode(ADD);

  // Video stuff
  coral = new Movie(this, "CoralReef.mp4");
  coral.loop();

  // Kinect stuff
  kinect = new KinectPV2(this);
  kinect.enableBodyTrackImg(true);
  kinect.enableSkeletonDepthMap(true);
  kinect.init();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  kinectSil = kinect.getBodyTrackImage();
  kinectSil.filter(INVERT);

  // make the background transparent and the body opaque white
  kinectSil.loadPixels();
  for (int i = 0; i < kinectSil.pixels.length; i++) {
    if (kinectSil.pixels[i] == color(0)) {
      kinectSil.pixels[i] = color(0, 0);
    } else {
      kinectSil.pixels[i] = color(255, 255);
    }
  }
  kinectSil.updatePixels();

  feedback.beginDraw();
  feedback.image(kinectSil, 0, 0, 1920, 1080);
  feedback.blend(coral, 0, 0, coral.width, coral.height, 0, 0, coral.width, coral.height, MULTIPLY);
  feedback.filter(THRESHOLD);
  feedback.endDraw();

  tint(255, 10);
  image(coral, 0, 0, width, height);
  blend(feedback, 0, 0, feedback.width, feedback.height, 0, 0, width, height, LIGHTEST);

  fill(45);
  rect(0, 0, 150, 54);
  textSize(30);
  fill(255);
  text(frameRate, 20, 40);
}
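A likely cause of the 7 fps: blend() and filter() on a PImage/PGraphics are CPU operations that copy 1920x1080 pixels back and forth every frame, as is the per-pixel loop, so neither the CPU nor the GPU ever shows up as "pushed". Keeping the compositing on the GPU may help; a hedged rewrite of the feedback pass using blendMode(), which is hardware-accelerated in P2D:

feedback.beginDraw();
feedback.background(0);
feedback.image(kinectSil, 0, 0, 1920, 1080);
feedback.blendMode(MULTIPLY);  // GPU-side compositing instead of blend()
feedback.image(coral, 0, 0, 1920, 1080);
feedback.blendMode(BLEND);     // restore the default mode
feedback.endDraw();

The filter(THRESHOLD) step has no blendMode() equivalent; moving it into a small PShader would keep that stage on the GPU as well.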
Can I extract the point data I get from the Kinect to be used in after effects?
I'm looking to export the data the Kinect is capturing into a format that I can use in After Effects. Is there any way to use the point data to export an OBJ sequence?
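After Effects doesn't read OBJ sequences natively, but plugins such as Plexus do, and an OBJ is just a text file, so Processing can write one per frame with createWriter(). A minimal sketch, assuming the Open Kinect for Processing v1 API (axis orientation and scale are arbitrary here):

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  int[] depth = kinect.getRawDepth();
  // one OBJ file per frame, e.g. frame-0001.obj
  PrintWriter obj = createWriter("frame-" + nf(frameCount, 4) + ".obj");
  for (int y = 0; y < 480; y += 4) {      // skip pixels to keep files small
    for (int x = 0; x < 640; x += 4) {
      int d = depth[x + y * 640];
      if (d > 0 && d < 2047) {            // drop invalid depth readings
        obj.println("v " + x + " " + (-y) + " " + (-d));
      }
    }
  }
  obj.flush();
  obj.close();
}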
Flip Webcam opencv
Hi, as you can see from the code I was able to mirror the webcam, but how do I get OpenCV to work with the flipped webcam? I would like to do face detection on the flipped webcam image.
import processing.video.*;
import gab.opencv.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height, 30);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }

  pushMatrix();
  scale(-1, 1);
  translate(-cam.width, 0);
  image(cam, 0, 0);
  popMatrix();

  opencv.loadImage(cam);
  Rectangle[] faces = opencv.detect();

  noFill();
  stroke(0, 255, 0);
  strokeWeight(3);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}
Use a value within pushMatrix () and popMatrix () outside it
Hi, I would like to use a value created within pushMatrix() and popMatrix() outside of that block. In my case, I would use the value generated by l = faces[0].x; in a rect() outside pushMatrix() and popMatrix(). How can I do this?
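pushMatrix()/popMatrix() only isolate transformations; they do not create a variable scope. Declaring the variable at sketch level makes it usable anywhere. A sketch fragment (faces is assumed to come from opencv.detect() as in the previous post):

float l;  // sketch-level variable, visible everywhere

void draw() {
  pushMatrix();
  scale(-1, 1);
  translate(-width, 0);
  if (faces.length > 0) {
    l = faces[0].x;  // store the value while inside the block
  }
  popMatrix();
  rect(l, 10, 20, 20);  // use it after popMatrix()
}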
CAN Kinect Physics Code Examples
I've been getting some notifications that the code examples for the CAN Kinect Physics tutorial no longer work. This is because the code formatting plugin was removed and because the code is severely outdated. For future generations, I will post the code examples below. This code is provided as is, since I stopped supporting it long ago (I haven't really used the Kinect since writing the tutorial). Perhaps those still interested in this code can gather here and, if needed, post updated versions of these code examples that run in more recent versions of Processing and the relevant libraries. I still get mails about this tutorial regularly and I will be pointing everyone to this thread. Good luck & happy, creative coding! :)
EDIT 30.05.2014
It seems the forum also has problems correctly displaying the code, or something else went wrong. Either way, I am providing a download link to the original three code examples (file size: 16 KB). Once again, I cannot and will not provide any support whatsoever on these code examples, as I stopped using the Kinect two years ago. Of course, feel free to share updated code examples via this thread.
LINK TO A ZIP-FILE CONTAINING THE ORIGINAL CODE EXAMPLES:
https://dl.dropboxusercontent.com/u/94122292/CANKinectPhysics.zip
interactive screen using live video, changing letters positions
Hello, I'm new to programming with Processing and am trying to make a piece of code for my art bachelor degree. What I want to do: make an interactive screen filled with letters (text), with a live cam recording what's happening in front of it and detecting motion. When there is motion, a letter that meets the motion line (human body contour) flips position with a letter from the side the motion came from. As a base I used a raindrops sketch and Shiffman's Example 16-13: Simple motion detection. But now I'm getting a gray screen and letters appear non-stop. Did I mess up something with the arrays? Should the text be put in something other than a String? I read about the Kinect, but I'm not sure it would help at this point. I hope to get hints on what to do next :]
import processing.video.*;
import java.awt.Frame;
import java.awt.Image;
import java.text.*;
Capture cam;
Letter[][] drops;
int dropsLength;
PImage prevFrame;
int sWidth = 1280;
int sHeight = 720;
String inputString = "Įsivaizduokime pasaulį, kur visi viską žino tiksliai, ir niekada neklysta. Niekam nekiltų abejoių, koks bus rytoj oras, kaip išsaugoti tirpstačius ledynus, ar koks visatos dydis. Žvelgiant į krentantį kamuoliuką kiekvienas galėtų pasakyti: - O šito kamuoliuko kritimo greitis 6,325 m/s. - Tikrai taip - atsakytų kitas. Ir viskas, daugiau nebebūtų jokių diskusijų, ieškojimų, matavimų. Su absoliučiu žinojimu gyvenimas taptų nebeįdomus, monotoniškas, tokiu atveju net progresas neįmanomas. Kai pradedu taip galvoti, džiaugiuosi nežinojimu, diskusijų galimybe, tiesos ieškojimu. Klaida suvokiama kaip neišvengiamas procesas teisybės ieškojime leidžia drąsiai žengti į praktikos sritį, nebijoti suklysti, o neteisingus procesus paversti progresu, žingsneliu link tikslo. Menininkas nebėra tas genijus, kuris turi sukurti kažką naujo, tarsi nežemiško, keičiančio visą mūsų suvokimą. Jo praktikos esmė eksperimentuoti ir klysti, organizuoti jau esamomis reikšmėmis ir kurti nau";
char[] inputLetters;
int dupStrings = 3; //times to dublicate text
int k = 800;
float threshold = 50;
void settings() {
size(1280, 720);
}
void setup() {
String[] cameras = Capture.list();
drops = new Letter[dupStrings][inputString.length()]; //[inputString.length()];
int wspace = 50;
inputLetters = new char[inputString.length()];
splitString();
// first row hight
int addLineHeight = 30;
for (int i = 0; i < dupStrings; i++) {
for (int j = 0; j < inputLetters.length; j++) {
if (inputLetters[j] < k + wspace ){
Letter testLetter = new Letter(inputLetters[j]);
testLetter.x = wspace;
testLetter.y = addLineHeight;
drops[i][j] = testLetter;
wspace += 10; //spaces between letters
// new row
if (wspace >= sWidth) {
wspace = 10;
addLineHeight += 40; // space between rows
}
}
else {
addLineHeight += 50;
}
}
}
// cam conect//
if (cameras.length == 0) {
println("There are no cameras available...");
// size() may only be called from settings(), so just exit here
exit();
}
else {
cam = new Capture(this, sWidth, sHeight);
cam.start();
cam.loadPixels();
prevFrame = createImage(cam.width, cam.height, RGB);
}
dropsLength = inputString.length();
}
void captureEvent(Capture cam) {
// Save previous frame for motion detection!!
prevFrame.copy(cam, 0, 0, cam.width, cam.height, 0, 0, cam.width, cam.height);
// Before we read the new frame, we always save the previous frame for comparison!
prevFrame.updatePixels();
// Read the new image from the camera
cam.read();
}
void splitString() {
for (int i = 0; i < inputString.length() ; i++) {
inputLetters[i] = inputString.charAt(i);
}
}
void draw() {
loadPixels();
cam.loadPixels();
prevFrame.loadPixels();
// Begin loop to walk through every pixel
for (int x = 0; x < cam.width; x ++ ) {
for (int y = 0; y < cam.height; y ++ ) {
int loc = x + y*cam.width; // Step 1, what is the 1D pixel location
color current = cam.pixels[loc]; // Step 2, what is the current color
color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color
// Step 4, compare colors (previous vs. current)
float r1 = red(current);
float g1 = green(current);
float b1 = blue(current);
float r2 = red(previous);
float g2 = green(previous);
float b2 = blue(previous);
float diff = dist(r1, g1, b1, r2, g2, b2);
// Step 5, How different are the colors?
// If the color at that pixel has changed, then there is motion at that pixel.
if (diff > threshold) {
// If motion, display black
pixels[loc] = color(0);
} else {
// If not, display white
pixels[loc] = color(255);
}
}
}
updatePixels(); // push the motion pixels to the screen; without this the canvas stays gray
//Responding to the brightness/color of the screen
for (int i = 0; i < dupStrings; i++) {
for (int j = 0; j < dropsLength; j++) {
if (drops[i][j].y < sHeight && drops[i][j].y > 0) {
int loc = drops[i][j].x + ((drops[i][j].y)-1)*sWidth;
float bright = brightness(cam.pixels[loc]);
if (bright > threshold) {
drops[i][j].dropLetter();
drops[i][j].upSpeed = 1;
}
else {
if (drops[i][j].y > threshold) {
int aboveLoc = drops[i][j].x + ((drops[i][j].y)-1)*sWidth;
float aboveBright = brightness(cam.pixels[aboveLoc]);
if (aboveBright < threshold) {
drops[i][j].liftLetter();
drops[i][j].upSpeed = drops[i][j].upSpeed * 5;
}
}
}
}
else {
drops[i][j].dropLetter();
}
drops[i][j].drawLetter();
cam.updatePixels();
}
}
}
class Letter {
int x;
int y;
int m;
char textLetter;
int upSpeed;
int alpha = 150;
Letter(char inputText) {
x = 100;
y = 100;
textLetter = inputText;
textSize(16);
upSpeed = 1;
}
void drawLetter() {
// if ( m < 1) {
fill(150, 150, 150 , alpha);
text(textLetter, x, y);
}
void letterFade() {
alpha -= 5;
if(alpha <= 0) {
y = int(random(-350, 0));
alpha = 255;
}
}
void dropLetter() {
// y++;
if (y > 730) {
letterFade();
}
}
void liftLetter() {
int newY = y - upSpeed;
if (newY >= 0) {
y = newY;
}
else {
y = 0;
}
}
}
Kinect v2 How to shorten the cable?
On the Kinect2. It is great and there are nice libraries for it.
But that cable is so bad... way too long, with too many proprietary connectors. For installation work or tight spaces it is really bad. Anyone who has used one must have come to a similar conclusion.
Has anyone seen, crafted, or heard of a good way to make it shorter or to use fewer parts? Or even to use more standard connectors?
Is there a developer edition that is easier to use?
It would be great to publish a way to do this since there doesn't seem to be one out there yet.
In a room of 5x5 meters I have to find a person's position (x,y).
Hi,
In a room of 5x5 meters, I have to determine the position (x, y) of people. I tried with HC-SR05 sensors, but they are not accurate; they return off-scale values. Can I use a Kinect? Is one enough? Is there code that has addressed this problem?
Thanks for your help.
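A Kinect can work for this, with caveats: the v1 depth camera is reliable to roughly 4 m with a 57-degree horizontal field of view, so a single unit cannot cover a full 5x5 m room from one wall; two units in adjacent corners would come closer. A rough single-sensor sketch, assuming Open Kinect for Processing and the widely used raw-to-meters approximation for the v1 (the depth threshold and blob test are crude assumptions; it treats everything in range as one person):

import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
}

// widely used approximation for Kinect v1 raw depth -> meters
float rawToMeters(int raw) {
  return 1.0 / (raw * -0.0030711016 + 3.3309495161);
}

void draw() {
  int[] depth = kinect.getRawDepth();
  float sumX = 0, sumZ = 0;
  int count = 0;
  for (int x = 0; x < 640; x++) {
    for (int y = 0; y < 480; y++) {
      int raw = depth[x + y * 640];
      if (raw > 0 && raw < 1000) {  // anything nearer than ~4 m
        sumX += x;
        sumZ += rawToMeters(raw);
        count++;
      }
    }
  }
  if (count > 1000) {               // enough pixels to call it a person
    float cx = sumX / count;        // horizontal pixel position of the blob
    float z = sumZ / count;         // distance from the sensor in meters
    // x across the room: spread the 57-degree horizontal FOV over depth
    float xMeters = (cx - 320) / 320.0 * z * tan(radians(28.5));
    println("position: x=" + xMeters + " m, y=" + z + " m");
  }
}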