Project author: ashislaha

Project description:
Detect the shape of drawn objects (classes: line, triangle, rectangle, pentagon, hexagon, circle) and render them in Augmented Reality.
Language: Swift
Repository: git://github.com/ashislaha/Shape-Detection-in-AR.git
Created: 2017-09-29T09:01:32Z
Project page: https://github.com/ashislaha/Shape-Detection-in-AR

License:



Shape Detection and Drawing in Augmented Reality (AR)

Detect the shape of drawn objects (classes: line, triangle, rectangle, pentagon, hexagon, circle) and render them in Augmented Reality.
The shape type is also labeled alongside the drawing:

“L” - Line, “T” - Triangle, “R” - Rectangle, “C” - Circle, “P” - Pentagon, “H” - Hexagon

Input Image :


Edge Detected Image :


Find Contours & Fill them for visualization :


Create Scene graph :


Example :


Basic Steps :

step 1 : Create an mlmodel for edge detection (generic)

step 2 : Take the image from the ARFrame & identify edges using edge_detection.mlmodel

step 3 : Find the contours in the edge-detected image & calculate the approximation points using OpenCV

step 4 : Determine the shapes and their image coordinates from the approximation points

step 5 : Map the image coordinates of the shapes into AR-world coordinates

step 6 : Render the scene graph
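
Putting the steps together, a minimal sketch of the pipeline (assumptions: `detectShapes(in:)` is a Swift-callable entry point on the project's `OpenCVWrapper`, and `runEdgeDetection(on:)` stands in for the CoreML call shown later in this README):

```swift
import ARKit
import SceneKit
import UIKit

// Sketch only: `detectShapes(in:)` and `runEdgeDetection(on:)` are assumed names,
// not the project's exact API; adjust to the actual wrapper interface.
final class ShapePipeline {

    private let openCVWrapper = OpenCVWrapper()

    func process(frame: ARFrame, in sceneView: ARSCNView) {
        // step 2 : take the image from the ARFrame
        let cameraImage = UIImage(ciImage: CIImage(cvPixelBuffer: frame.capturedImage))

        // step 2 : identify edges using edge_detection.mlmodel
        guard let edgeImage = runEdgeDetection(on: cameraImage) else { return }

        // steps 3-4 : contours, approximation points and shape classification (OpenCV)
        openCVWrapper.detectShapes(in: edgeImage)

        // steps 5-6 : map image coordinates into AR-world coordinates and render
        if let shapes = openCVWrapper.shapesResults as? [[String: Any]] {
            sceneView.scene = SceneNodeCreator.getSceneNode(shapreResults: shapes)
        }
    }

    private func runEdgeDetection(on image: UIImage) -> UIImage? {
        // Placeholder: see "Use the CoreML model ..." below.
        return nil
    }
}
```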

Pods are not checked into the repository, so please run `pod install` before building the project.


Create an Edge Detection CoreML model


Original Caffe Model : http://vcl.ucsd.edu/hed/hed_pretrained_bsds.caffemodel

The GitHub project is : https://github.com/s9xie/hed

Download the Edge_detection CoreML model (58 MB) from : https://drive.google.com/drive/folders/0B0QC-w3ZqaT1ZEtpSG5HOE5VWEk, which exposes 6 different outputs.

I use only the side output of the original model (the dsn3 output), to reduce the space complexity.

```
(virtualenv2.7) C02QP68UG8WP:CoreML creation ashis.laha$ python mlmodel_converter.py

================= Starting Conversion from Caffe to CoreML ======================
Layer 0: Type: 'Input', Name: 'input'. Output(s): 'data'.
Ignoring batch size and retaining only the trailing 3 dimensions for conversion.
Layer 1: Type: 'Convolution', Name: 'conv1_1'. Input(s): 'data'. Output(s): 'conv1_1'.
Layer 2: Type: 'ReLU', Name: 'relu1_1'. Input(s): 'conv1_1'. Output(s): 'conv1_1'.
Layer 3: Type: 'Convolution', Name: 'conv1_2'. Input(s): 'conv1_1'. Output(s): 'conv1_2'.
Layer 4: Type: 'ReLU', Name: 'relu1_2'. Input(s): 'conv1_2'. Output(s): 'conv1_2'.
Layer 5: Type: 'Pooling', Name: 'pool1'. Input(s): 'conv1_2'. Output(s): 'pool1'.
Layer 6: Type: 'Convolution', Name: 'conv2_1'. Input(s): 'pool1'. Output(s): 'conv2_1'.
Layer 7: Type: 'ReLU', Name: 'relu2_1'. Input(s): 'conv2_1'. Output(s): 'conv2_1'.
Layer 8: Type: 'Convolution', Name: 'conv2_2'. Input(s): 'conv2_1'. Output(s): 'conv2_2'.
Layer 9: Type: 'ReLU', Name: 'relu2_2'. Input(s): 'conv2_2'. Output(s): 'conv2_2'.
Layer 10: Type: 'Pooling', Name: 'pool2'. Input(s): 'conv2_2'. Output(s): 'pool2'.
Layer 11: Type: 'Convolution', Name: 'conv3_1'. Input(s): 'pool2'. Output(s): 'conv3_1'.
Layer 12: Type: 'ReLU', Name: 'relu3_1'. Input(s): 'conv3_1'. Output(s): 'conv3_1'.
Layer 13: Type: 'Convolution', Name: 'conv3_2'. Input(s): 'conv3_1'. Output(s): 'conv3_2'.
Layer 14: Type: 'ReLU', Name: 'relu3_2'. Input(s): 'conv3_2'. Output(s): 'conv3_2'.
Layer 15: Type: 'Convolution', Name: 'conv3_3'. Input(s): 'conv3_2'. Output(s): 'conv3_3'.
Layer 16: Type: 'ReLU', Name: 'relu3_3'. Input(s): 'conv3_3'. Output(s): 'conv3_3'.
Layer 17: Type: 'Convolution', Name: 'score-dsn3'. Input(s): 'conv3_3'. Output(s): 'score-dsn3'.
Layer 18: Type: 'Deconvolution', Name: 'upsample_4'. Input(s): 'score-dsn3'. Output(s): 'score-dsn3-up'.
Layer 19: Type: 'Crop', Name: 'crop'. Input(s): 'score-dsn3-up', 'data'. Output(s): 'upscore-dsn3'.

================= Summary of the conversion: ===================================
Detected input(s) and shape(s) (ignoring batch size):
'data' : 3, 500, 500

Network Input name(s): 'data'.
Network Output name(s): 'upscore-dsn3'.

input {
  name: "data"
  shortDescription: "Input image to be edge-detected. Must be exactly 500x500 pixels."
  type {
    imageType {
      width: 500
      height: 500
      colorSpace: BGR
    }
  }
}

output {
  name: "upscore-dsn3"
  type {
    multiArrayType {
      dataType: DOUBLE
    }
  }
}

metadata {
  shortDescription: "Holistically-Nested Edge Detection. https://github.com/s9xie/hed"
  author: "Original paper: Xie, Saining and Tu, Zhuowen. Caffe implementation: Yangqing Jia. CoreML port: Ashis Laha"
  license: "Unknown"
}
```
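
On the app side you can sanity-check the converted model against the summary above; a minimal sketch, assuming edge_detection.mlmodel has been added to the Xcode project (Xcode compiles it into edge_detection.mlmodelc in the bundle):

```swift
import CoreML

// Inspect the converted model's interface. Expect one 500x500 BGR image input
// named "data" and one MultiArray (Double) output named "upscore-dsn3".
func inspectEdgeModel() {
    guard let url = Bundle.main.url(forResource: "edge_detection", withExtension: "mlmodelc"),
          let model = try? MLModel(contentsOf: url) else {
        print("edge_detection model not found in bundle")
        return
    }
    for (name, description) in model.modelDescription.inputDescriptionsByName {
        print("input:", name, description)   // "data" : image, 500 x 500, BGR
    }
    for (name, description) in model.modelDescription.outputDescriptionsByName {
        print("output:", name, description)  // "upscore-dsn3" : MultiArray (Double)
    }
}
```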


Use the CoreML model to detect edges in the camera image captured from the ARFrame

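One way to run the model on the ARFrame's captured image is via Vision; a hedged sketch, where the generated class name `edge_detection` and the conversion of the output MultiArray back into a grayscale image are assumptions that depend on how you converted and integrated the model:

```swift
import ARKit
import Vision
import CoreML

// Sketch: run HED edge detection on the camera image behind an ARFrame.
// `edge_detection` is assumed to be the Xcode-generated model class.
func detectEdges(in frame: ARFrame, completion: @escaping (MLMultiArray?) -> Void) {
    guard let coreMLModel = try? edge_detection(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The model emits a MultiArray, not a pixel buffer, so the result arrives
        // as a feature value; convert it to an image before the OpenCV stage.
        let observation = request.results?.first as? VNCoreMLFeatureValueObservation
        completion(observation?.featureValue.multiArrayValue)
    }
    request.imageCropAndScaleOption = .scaleFill // the model expects exactly 500x500

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```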

Adding the OpenCV framework :

step 1 : Create a Podfile with pod 'OpenCV' and run pod install

step 2 : Create a bridging header

  1. Create an Objective-C file from the Cocoa Touch class template
  2. Name it OpenCVWrapper
  3. Xcode is smart and proposes to create a bridging header. Click on Create Bridging Header.

step 3 : Configure the bridging header ($project_name-Bridging-Header.h)

  1. #import "OpenCVWrapper.h" in the bridging header

step 4 : Switch to Objective-C++

  1. Rename OpenCVWrapper.m to OpenCVWrapper.mm

step 5 : Import OpenCV into OpenCVWrapper.mm, above all other imports:

```objective-c
#import <opencv2/opencv.hpp>
#import "OpenCVWrapper.h"
```

NOTE : You will get an ERROR on enum { NO, FEATHER, MULTI_BAND }; because the enum member NO collides with Objective-C's NO macro. Placing #import <opencv2/opencv.hpp> above all other imports resolves the issue.

Step 6 : Write a test method

In OpenCVWrapper.h :

```objective-c
- (void)isOpenCVWorking;
```

In OpenCVWrapper.mm :

```objective-c
@implementation OpenCVWrapper

- (void)isOpenCVWorking {
    NSLog(@"It's working");
}

@end
```

And call it from a Swift class like :

```swift
let openCVWrapper = OpenCVWrapper()
openCVWrapper.isOpenCVWorking()
```

It will log : “It's working”

Finding Shapes :

step 1 : Convert the image into a cv::Mat

```objective-c
+ (cv::Mat)CVMatFromImage:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    size_t numberOfComponents = CGColorSpaceGetNumberOfComponents(colorSpace);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGBitmapInfo bitmapInfo = kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault;

    // Check whether the UIImage is already greyscale
    if (numberOfComponents == 1) {
        cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    }

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    bitmapInfo);   // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
```

step 2 : Apply morphology transformations

step 3 : Find the contours in the image

step 4 : Calculate the approximation points from each contour

step 5 : Based on the approximation size, classify the shape

step 6 : Retrieve the positions (coordinates) of each shape, and the radius & center of each circle

```objective-c
- (cv::Mat)shapeDetection:(UIImage *)image { // image is the result of edge detection; it's in grayscale
    /*
    // Convert to grayscale
    cv::Mat gray;
    cv::cvtColor(src, gray, CV_BGR2GRAY);

    // Convert to binary image using Canny
    cv::Mat bw;
    cv::Canny(gray, bw, 0, 50, 5);
    imageView.image = [UIImage fromCVMat:gray];
    */
    cv::Mat cameraFeed = [OpenCVWrapper CVMatFromImage:image];
    std::vector< std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;

    // Before finding contours, apply morphology transformations.
    // Closing the image (Method 1: dilate, then erode)
    cv::Mat bw2;
    cv::Mat erodedBW2;
    cv::Mat se = getStructuringElement(0, cv::Size(5, 5));
    cv::dilate(cameraFeed, bw2, se);
    cv::erode(bw2, erodedBW2, se);

    // Closing the image (Method 2: a single morphologyEx call)
    cv::morphologyEx(cameraFeed, erodedBW2, cv::MORPH_CLOSE, se);

    // Find contours
    findContours(cameraFeed, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

    bool objectFound = false;
    if (hierarchy.size() > 0) {
        for (int index = 0; index >= 0; index = hierarchy[index][0]) {
            cv::Moments moment = moments((cv::Mat)contours[index]);
            double area = moment.m00;
            objectFound = (area > 100) ? true : false;
        }
        // Let the user know an object was found by filling the contours
        if (objectFound == true) {
            for (int i = 0; i < contours.size(); i++) {
                cv::drawContours(cameraFeed, contours, i, cvScalar(80, 255, 255), CV_FILLED);
            }
        }

        // Infer the shape from each contour via its polygonal approximation
        std::vector<cv::Point> approx;
        for (int i = 0; i < contours.size(); i++) {
            cv::approxPolyDP(cv::Mat(contours[i]), approx, cv::arcLength(cv::Mat(contours[i]), true) * 0.02, true);

            // Skip small contours
            if (!(std::fabs(cv::contourArea(contours[i])) < 100)) { // && cv::isContourConvex(approx)
                printf("\n\n\n .......Area : %.0f\t", std::fabs(cv::contourArea(contours[i])));
                cv::Point2f center;
                float radius = 0.0;
                NSString *shape = @"";

                switch (approx.size()) {
                    case 2: // line
                        printf("Line");
                        shape = @"line";
                        break;
                    case 3: // triangle
                        printf("Triangle");
                        shape = @"triangle";
                        break;
                    case 4: // rectangle
                        printf("Rectangle");
                        shape = @"rectangle";
                        break;
                    case 5: // pentagon
                        printf("Pentagon");
                        shape = @"pentagon";
                        break;
                    case 6: // hexagon
                        printf("Hexagon");
                        shape = @"hexagon";
                        break;
                    default: // circle
                        printf("circle \t");
                        shape = @"circle";
                        cv::minEnclosingCircle(cv::Mat(contours[i]), center, radius);
                        printf("Approx size : %ld , radius = %.1f", approx.size(), radius);
                }

                NSMutableArray *positions = [[NSMutableArray alloc] init];
                if ([shape isEqual:@"circle"]) {
                    NSDictionary *dict = @{ @"radius"   : [NSNumber numberWithFloat:radius],
                                            @"center.x" : [NSNumber numberWithFloat:center.x],
                                            @"center.y" : [NSNumber numberWithFloat:center.y] };
                    [positions addObject:dict];
                }
                for (int j = 0; j < approx.size(); j++) {
                    NSDictionary *dict = @{ @"x" : [NSNumber numberWithInt:approx[j].x],
                                            @"y" : [NSNumber numberWithInt:approx[j].y] };
                    [positions addObject:dict];
                }
                [self.shapesResults addObject:@{shape : positions}]; // record the shape and its points
            }
        }
    }
    return cameraFeed;
}
```

step 7 : Convert the cv::Mat back into a UIImage

```objective-c
+ (UIImage *)ImageFromCVMat:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapInfo = kCGBitmapByteOrder32Little | (cvMat.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst);
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // width
                                        cvMat.rows,                 // height
                                        8,                          // bits per component
                                        8 * cvMat.elemSize(),       // bits per pixel
                                        cvMat.step[0],              // bytes per row
                                        colorSpace,                 // colorspace
                                        bitmapInfo,                 // bitmap info
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // decode
                                        false,                      // should interpolate
                                        kCGRenderingIntentDefault); // intent

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
```

Step 8 : Save the result for Visualization


```objective-c
cv::Mat cameraFeed = [self shapeDetection:image];
UIImage *result = [OpenCVWrapper ImageFromCVMat:cameraFeed];

// Save it into the photo gallery (rotated to match the camera orientation)
UIImage *rotatedImage = [[UIImage alloc] initWithCGImage:[result CGImage] scale:1.0 orientation:UIImageOrientationRight];
UIImageWriteToSavedPhotosAlbum(rotatedImage, self, nil, nil);
```

Coordinate Mapping & SCNNode Creation :


step 1 : Create a straight line :

```swift
class func createline(from: SCNVector3, to: SCNVector3) -> SCNNode { // Z is static
    // Calculate the angle of the segment
    let dx = from.x - to.x
    let dy = from.y - to.y
    var theta = atan(Double(dy / dx))
    if theta.isNaN { // dy/dx is NaN in the 0/0 case
        theta = Double.pi / 2 // 90 degrees
    }

    // Create the node: a thin box stretched between the two points
    let width = CGFloat(sqrt(dx * dx + dy * dy))
    let height: CGFloat = 0.01
    let length: CGFloat = 0.08
    let chamferRadius: CGFloat = 0.01
    let route = SCNBox(width: width, height: height, length: length, chamferRadius: chamferRadius)
    route.firstMaterial?.diffuse.contents = UIColor.getRandomColor()

    let midPosition = SCNVector3Make((from.x + to.x) / 2, (from.y + to.y) / 2, 0)
    let node = SCNNode(geometry: route)
    node.position = midPosition
    node.rotation = SCNVector4Make(0, 0, 1, Float(theta)) // rotate along the Z axis
    return node
}
```
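
As an aside, the NaN special case only fires when the two points coincide; an atan2-based variant (a suggested alternative, not the project's code) handles vertical segments without it:

```swift
// atan2 returns the full-quadrant angle and copes with dx == 0 directly.
let theta = atan2(Double(from.y - to.y), Double(from.x - to.x))
```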

step 2 : Create A Circle :

```swift
class func createCircle(center: SCNVector3, radius: CGFloat) -> SCNNode {
    // A very flat cylinder renders as a filled circle
    let geometry: SCNGeometry = SCNCylinder(radius: radius, height: 0.01)
    geometry.firstMaterial?.diffuse.contents = UIColor.getRandomColor()
    geometry.firstMaterial?.specular.contents = UIColor.getRandomColor()

    let node = SCNNode(geometry: geometry)
    node.position = center
    node.rotation = SCNVector4Make(1, 0, 0, Float(Double.pi / 2)) // rotate along the X axis
    return node
}
```

Step 3 : Create a Boundary :

```swift
class func boundaryNode() -> SCNNode {
    let node = SCNNode()
    // Corners of a 0.5 x 0.5 square, connected pairwise with line nodes
    let points: [(Float, Float)] = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
    for i in 0..<4 {
        let x1 = points[i].0
        let y1 = points[i].1
        let x2 = points[(i + 1) % points.count].0
        let y2 = points[(i + 1) % points.count].1
        let from = SCNVector3Make(x1, y1, 0)
        let to = SCNVector3Make(x2, y2, 0)
        node.addChildNode(SceneNodeCreator.createline(from: from, to: to))
    }
    return node
}
```

Step 4 : Map from image coordinates into AR coordinates :

The image coordinates look like :

```
.......Area : 9656   Triangle
.......Area : 17871  Rectangle
.......Area : 9368   circle   Approx size : 8 , radius = 76.6
.......Area : 3100   Rectangle

Shape : triangle Values : (
    { x = 198; y = 255; },
    { x = 119; y = 373; },
    { x = 208; y = 485; })
Shape : rectangle Values : (
    { x = 303; y = 128; },
    { x = 231; y = 162; },
    { x = 247; y = 367; },
    { x = 330; y = 349; })
Shape : circle Values : (
    { "center.x" = 151; "center.y" = "106.5523"; radius = "76.61115"; },
    { x = 148; y = 30; },
    { x = 115; y = 77; },
    { x = 112; y = 118; },
    { x = 127; y = 169; },
    { x = 156; y = 183; },
    { x = 183; y = 152; },
    { x = 191; y = 95; },
    { x = 186; y = 60; })
Shape : rectangle Values : (
    { x = 499; y = 0; },
    { x = 2; y = 0; },
    { x = 0; y = 499; },
    { x = 5; y = 8; })
```
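
Note the last rectangle: it spans roughly the full 500x500 frame and is the image border itself, which the conversion function filters out with its xMin/xMax check. The per-vertex mapping the function applies can be read off as a small helper (an illustrative refactoring; the project inlines this expression):

```swift
import SceneKit

// Illustrative helper matching the inline expression in getSceneNode(shapreResults:):
// the saved image is rotated 90 degrees, so x and y swap, pixels are scaled down by 1000,
// and the result is offset to the AR window origin (SceneNodeCreator.windowRoot).
func arPosition(imageX x: Float, imageY y: Float) -> SCNVector3 {
    let imageWidth: Float = 499
    let convertionRatio: Float = 1000.0
    return SCNVector3Make((imageWidth - y) / convertionRatio + SceneNodeCreator.windowRoot.x,
                          (imageWidth - x) / convertionRatio + SceneNodeCreator.windowRoot.y,
                          SceneNodeCreator.z)
}
```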

The conversion function :

```swift
class func getSceneNode(shapreResults: [[String: Any]]) -> SCNScene { // input is an array of dictionaries
    let scene = SCNScene()
    let convertionRatio: Float = 1000.0
    let imageWidth: Int = 499
    let xMin = 10
    let xMax = 490

    for eachShape in shapreResults {
        if let dictionary = eachShape.first {
            let values = dictionary.value as! [[String: Any]]
            switch dictionary.key {
            case "circle":
                if let circleParams = values.first as? [String: Float] {
                    let x = circleParams["center.x"] ?? 0.0
                    let y = circleParams["center.y"] ?? 0.0
                    let radius = circleParams["radius"] ?? 0.0
                    let center = SCNVector3Make(Float(Float(imageWidth) - y) / convertionRatio + SceneNodeCreator.windowRoot.x,
                                                Float(Float(imageWidth) - x) / convertionRatio + SceneNodeCreator.windowRoot.y,
                                                SceneNodeCreator.z)
                    scene.rootNode.addChildNode(SceneNodeCreator.createCircle(center: center, radius: CGFloat(radius / convertionRatio)))

                    // Add the "C" label just above the circle
                    var textPosition = center
                    textPosition.y = textPosition.y + (radius / convertionRatio) + 0.01
                    scene.rootNode.addChildNode(SceneNodeCreator.create3DText("C", position: textPosition))
                }
            case "line", "triangle", "rectangle", "pentagon", "hexagon":
                for i in 0..<values.count { // connect all points using straight lines (basic)
                    let x1 = values[i]["x"] as! Int
                    let y1 = values[i]["y"] as! Int
                    let x2 = values[(i + 1) % values.count]["x"] as! Int
                    let y2 = values[(i + 1) % values.count]["y"] as! Int

                    // Skip the boundary rectangle here
                    if x1 > xMin && x1 < xMax {
                        let from = SCNVector3Make(Float(imageWidth - y1) / convertionRatio + SceneNodeCreator.windowRoot.x,
                                                  Float(imageWidth - x1) / convertionRatio + SceneNodeCreator.windowRoot.y,
                                                  SceneNodeCreator.z)
                        let to = SCNVector3Make(Float(imageWidth - y2) / convertionRatio + SceneNodeCreator.windowRoot.x,
                                                Float(imageWidth - x2) / convertionRatio + SceneNodeCreator.windowRoot.y,
                                                SceneNodeCreator.z)
                        scene.rootNode.addChildNode(SceneNodeCreator.createline(from: from, to: to))
                    }
                }

                // Add the shape's text label
                switch values.count {
                case 2: // line
                    let x1 = values[0]["x"] as! Int
                    let y1 = values[0]["y"] as! Int
                    let x2 = values[1]["x"] as! Int
                    let y2 = values[1]["y"] as! Int
                    if x1 > xMin && x1 < xMax {
                        let center = SceneNodeCreator.center(diagonal_p1: (Float(x1), Float(y1)), diagonal_p2: (Float(x2), Float(y2)))
                        let centerVector = SCNVector3Make((Float(imageWidth) - center.1) / convertionRatio + SceneNodeCreator.windowRoot.x + 0.01,
                                                          (Float(imageWidth) - center.0) / convertionRatio + SceneNodeCreator.windowRoot.y + 0.01,
                                                          SceneNodeCreator.z)
                        scene.rootNode.addChildNode(SceneNodeCreator.create3DText("L", position: centerVector))
                    }
                case 3: // triangle
                    let x1 = values[0]["x"] as! Int
                    let y1 = values[0]["y"] as! Int
                    let x2 = values[1]["x"] as! Int
                    let y2 = values[1]["y"] as! Int
                    let x3 = values[2]["x"] as! Int
                    let y3 = values[2]["y"] as! Int
                    if x1 > xMin && x1 < xMax {
                        let centroid = SceneNodeCreator.centroidOfTriangle(point1: (Float(x1), Float(y1)),
                                                                           point2: (Float(x2), Float(y2)),
                                                                           point3: (Float(x3), Float(y3)))
                        let centerVector = SCNVector3Make((Float(imageWidth) - centroid.1) / convertionRatio + SceneNodeCreator.windowRoot.x,
                                                          (Float(imageWidth) - centroid.0) / convertionRatio + SceneNodeCreator.windowRoot.y,
                                                          SceneNodeCreator.z)
                        scene.rootNode.addChildNode(SceneNodeCreator.create3DText("T", position: centerVector))
                    }
                case 4: // rectangle
                    let x1 = values[0]["x"] as! Int
                    let y1 = values[0]["y"] as! Int
                    let x2 = values[2]["x"] as! Int
                    let y2 = values[2]["y"] as! Int
                    if x1 > xMin && x1 < xMax {
                        let center = SceneNodeCreator.center(diagonal_p1: (Float(x1), Float(y1)), diagonal_p2: (Float(x2), Float(y2)))
                        let centerVector = SCNVector3Make((Float(imageWidth) - center.1) / convertionRatio + SceneNodeCreator.windowRoot.x,
                                                          (Float(imageWidth) - center.0) / convertionRatio + SceneNodeCreator.windowRoot.y,
                                                          SceneNodeCreator.z)
                        scene.rootNode.addChildNode(SceneNodeCreator.create3DText("R", position: centerVector))
                    }
                case 5, 6: // pentagon, hexagon
                    let x1 = values[0]["x"] as! Int
                    let y1 = values[0]["y"] as! Int
                    let x2 = values[3]["x"] as! Int
                    let y2 = values[3]["y"] as! Int
                    if x1 > xMin && x1 < xMax {
                        let center = SceneNodeCreator.center(diagonal_p1: (Float(x1), Float(y1)), diagonal_p2: (Float(x2), Float(y2)))
                        let centerVector = SCNVector3Make((Float(imageWidth) - center.1) / convertionRatio + SceneNodeCreator.windowRoot.x,
                                                          (Float(imageWidth) - center.0) / convertionRatio + SceneNodeCreator.windowRoot.y,
                                                          SceneNodeCreator.z)
                        let text = (values.count == 5) ? "P" : "H"
                        scene.rootNode.addChildNode(SceneNodeCreator.create3DText(text, position: centerVector))
                    }
                default:
                    print("No shape")
                }
            default:
                print("This is the default case for the drawing node")
            }
        }
    }
    // Add the window boundary
    scene.rootNode.addChildNode(SceneNodeCreator.boundaryNode())
    return scene
}
```
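
The helpers center(diagonal_p1:diagonal_p2:), centroidOfTriangle(point1:point2:point3:) and create3DText(_:position:) are used above but not listed in this README; minimal sketches of what they might look like (assumptions, not the project's exact implementations):

```swift
import SceneKit
import UIKit

extension SceneNodeCreator {

    // Midpoint of a diagonal: used as the label anchor for lines, rectangles,
    // pentagons and hexagons.
    class func center(diagonal_p1: (Float, Float), diagonal_p2: (Float, Float)) -> (Float, Float) {
        return ((diagonal_p1.0 + diagonal_p2.0) / 2, (diagonal_p1.1 + diagonal_p2.1) / 2)
    }

    // Centroid of a triangle: the average of its three vertices.
    class func centroidOfTriangle(point1: (Float, Float), point2: (Float, Float), point3: (Float, Float)) -> (Float, Float) {
        return ((point1.0 + point2.0 + point3.0) / 3, (point1.1 + point2.1 + point3.1) / 3)
    }

    // Extrude the label string into a small 3D text node.
    class func create3DText(_ text: String, position: SCNVector3) -> SCNNode {
        let textGeometry = SCNText(string: text, extrusionDepth: 0.5)
        textGeometry.firstMaterial?.diffuse.contents = UIColor.getRandomColor()
        let node = SCNNode(geometry: textGeometry)
        node.position = position
        node.scale = SCNVector3Make(0.005, 0.005, 0.005) // SCNText units are large; shrink to scene scale
        return node
    }
}
```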
