Project 1 · CS231M 2015

Released on Monday, April 13, 2015. Due by Friday, April 24, 2015, 11:59 PM.

Instructions

Starter Code

For this assignment, you'll implement Exposure Fusion and Panorama Stitching. To get started, download the starter code.

The starter code includes a fully functional user interface for capturing, processing, and previewing images. We have also implemented the JNI wrappers for you. Your task is to implement the necessary code in Panorama.cpp and HDR.cpp.

Setup

  1. Import the starter code into Eclipse.
  2. Right-click on the imported project, select Properties, and under Android, verify that the OpenCV library path is correct. If not, replace it with the correct reference.
  3. Set the OPENCV_PATH environment variable in Eclipse's preferences (under C / C++ > Build > Environment).
  4. Verify everything builds successfully.

Submission

  1. Your write-up must be a PDF file.
    Copy it to your project folder and name it report.pdf.
  2. Execute create-submission.py in the project folder. This should create a zip archive named [sunet-id]-project-1.zip.
  3. The zip archive contains everything that needs to be submitted.
    Email it to cs231m+p1@gmail.com.

Honor Code

Your submission must be your own work. No external libraries, besides the ones already referenced by the starter code, may be used. We expect all students to adhere to the Stanford Honor Code.

1. Exposure Fusion — 50 Pts + 10 Extra Credit

We'll be following the implementation described in this paper.

However, feel free to treat it as a baseline and experiment with different methods.

1.1 Implementation — 40 Pts

Inside the starter code project, navigate to the file jni/HDR.cpp. All your code for this section goes in this file; you do not need to write any additional Java code. Follow the comments in the source code and implement each section.
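For orientation only, here is a minimal sketch of the per-pixel quality measures exposure fusion relies on (contrast, saturation, and well-exposedness) and their normalization across the stack. It is not the starter code's structure; the function names are hypothetical, and images are assumed to be CV_32FC3 scaled to [0, 1].

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Per-pixel weight map for one exposure (exponents on each measure are
    // left at their default of 1, so the weight is a plain product).
    static cv::Mat computeWeight(const cv::Mat& img /* CV_32FC3 in [0,1] */) {
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

        // Contrast: absolute response of a Laplacian filter on the grayscale image.
        cv::Mat contrast;
        cv::Laplacian(gray, contrast, CV_32F);
        contrast = cv::abs(contrast);

        // Saturation: standard deviation across the three color channels.
        std::vector<cv::Mat> ch(3);
        cv::split(img, ch);
        cv::Mat mean = (ch[0] + ch[1] + ch[2]) / 3.0f;
        cv::Mat d0 = ch[0] - mean, d1 = ch[1] - mean, d2 = ch[2] - mean;
        cv::Mat var = (d0.mul(d0) + d1.mul(d1) + d2.mul(d2)) / 3.0f;
        cv::Mat sat;
        cv::sqrt(var, sat);

        // Well-exposedness: Gaussian around 0.5 (sigma = 0.2), over all channels.
        cv::Mat wexp = cv::Mat::ones(img.size(), CV_32F);
        for (int c = 0; c < 3; ++c) {
            cv::Mat d = ch[c] - 0.5f;
            cv::Mat arg = d.mul(d) * (-1.0 / (2.0 * 0.2 * 0.2));
            cv::Mat e;
            cv::exp(arg, e);
            wexp = wexp.mul(e);
        }

        cv::Mat w = contrast.mul(sat).mul(wexp) + 1e-12f;  // epsilon avoids all-zero weights
        return w;
    }

    // Normalize the weight maps so they sum to one at every pixel.
    static void normalizeWeights(std::vector<cv::Mat>& weights) {
        cv::Mat sum = cv::Mat::zeros(weights[0].size(), CV_32F);
        for (size_t i = 0; i < weights.size(); ++i) sum += weights[i];
        for (size_t i = 0; i < weights.size(); ++i) weights[i] /= sum;
    }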

The starter code contains two presets for testing your HDR algorithm. These are as follows:

  1. Memorial Church Set. This is a pre-aligned image stack.
  2. Grand Canal Set. This stack has to be registered first to produce a reasonable fusion result. See the comments accompanying the align_images function for more details, and the alignment sketch at the end of this subsection.

Your report should include the fused output for both test sets.

Tip: You can grab a screenshot by holding down the power and volume-down buttons.
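As a rough illustration of the registration step (not the starter code's align_images implementation), one simple way to align an exposure stack with OpenCV is to estimate an affine transform from each image to a reference frame and warp accordingly. The helper name below is hypothetical and the input is assumed to be 8-bit BGR.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Aligns every image in the stack to stack[ref].
    static void alignStack(std::vector<cv::Mat>& stack, int ref = 0) {
        cv::Mat refGray;
        cv::cvtColor(stack[ref], refGray, cv::COLOR_BGR2GRAY);

        for (size_t i = 0; i < stack.size(); ++i) {
            if ((int)i == ref) continue;

            cv::Mat gray;
            cv::cvtColor(stack[i], gray, cv::COLOR_BGR2GRAY);

            // estimateRigidTransform returns a 2x3 affine matrix (empty on failure).
            // Large exposure differences can hurt this step; preprocessing the
            // grayscale images (e.g., histogram equalization) tends to help.
            cv::Mat M = cv::estimateRigidTransform(gray, refGray, /*fullAffine=*/false);
            if (M.empty()) continue;  // fall back to the unaligned image

            cv::Mat warped;
            cv::warpAffine(stack[i], warped, M, stack[ref].size());
            stack[i] = warped;
        }
    }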

1.2 Analysis — 10 Pts

Include a brief discussion on the following in your project report:

  1. If your input images had a large EV delta, how would it affect the output?
  2. Compare Laplacian pyramid blending to linear blending (see the blending sketch after this list).
  3. How does the number of levels in pyramid blending affect the output?
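To make the last two questions concrete, here is a rough sketch of both blending strategies, assuming CV_32FC3 images and single-channel CV_32F weight maps that already sum to one at every pixel. The function names are illustrative, not part of the starter code.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Linear (naive) blending: a weighted average at full resolution.
    static cv::Mat blendLinear(const std::vector<cv::Mat>& imgs,
                               const std::vector<cv::Mat>& weights) {
        cv::Mat out = cv::Mat::zeros(imgs[0].size(), CV_32FC3);
        for (size_t i = 0; i < imgs.size(); ++i) {
            cv::Mat w3;
            cv::cvtColor(weights[i], w3, cv::COLOR_GRAY2BGR);
            out += imgs[i].mul(w3);
        }
        return out;
    }

    // Laplacian pyramid blending: blend each frequency band separately,
    // then collapse the pyramid. `levels` controls how many bands are used.
    static cv::Mat blendPyramid(const std::vector<cv::Mat>& imgs,
                                const std::vector<cv::Mat>& weights, int levels) {
        std::vector<cv::Mat> blended(levels);  // blended Laplacian pyramid
        for (size_t i = 0; i < imgs.size(); ++i) {
            // Gaussian pyramid of the weight map.
            std::vector<cv::Mat> gw(levels);
            gw[0] = weights[i];
            for (int l = 1; l < levels; ++l) cv::pyrDown(gw[l - 1], gw[l]);

            // Laplacian pyramid of the image.
            std::vector<cv::Mat> gi(levels), li(levels);
            gi[0] = imgs[i];
            for (int l = 1; l < levels; ++l) cv::pyrDown(gi[l - 1], gi[l]);
            for (int l = 0; l < levels - 1; ++l) {
                cv::Mat up;
                cv::pyrUp(gi[l + 1], up, gi[l].size());
                li[l] = gi[l] - up;
            }
            li[levels - 1] = gi[levels - 1];

            // Accumulate weighted Laplacian coefficients per level.
            for (int l = 0; l < levels; ++l) {
                cv::Mat w3;
                cv::cvtColor(gw[l], w3, cv::COLOR_GRAY2BGR);
                if (blended[l].empty())
                    blended[l] = cv::Mat::zeros(li[l].size(), CV_32FC3);
                blended[l] += li[l].mul(w3);
            }
        }

        // Collapse the blended pyramid from coarse to fine.
        cv::Mat result = blended[levels - 1];
        for (int l = levels - 2; l >= 0; --l) {
            cv::Mat up;
            cv::pyrUp(result, up, blended[l].size());
            result = up + blended[l];
        }
        return result;
    }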

1.3 Extra Credit — 10 Pts

Extend your implementation to support multi-image stacks consisting of more than two images. To receive credit, your implementation must support non-aligned stacks. Include screenshots of the input stack and the fused output.

2. Panorama Stitching — 50 Pts + 10 Extra Credit

2.1 Implementation — 40 Pts

Inside the starter code project, navigate to the file jni/Panorama.cpp. All your code for this section goes in this file; you do not need to write any additional Java code. Follow the comments in the source code and implement each section.
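As a point of reference only (independent of the starter code's actual structure), a typical two-image stitching pipeline runs feature detection, matching, RANSAC homography estimation, and warping. The sketch below uses the older OpenCV 2.4-style class names (newer versions use cv::ORB::create() and cv::RANSAC); the function name is hypothetical.

    #include <opencv2/opencv.hpp>
    #include <vector>

    static cv::Mat stitchPair(const cv::Mat& left, const cv::Mat& right) {
        // 1. Detect ORB keypoints and descriptors on grayscale versions.
        cv::Mat grayL, grayR;
        cv::cvtColor(left, grayL, cv::COLOR_BGR2GRAY);
        cv::cvtColor(right, grayR, cv::COLOR_BGR2GRAY);
        cv::ORB orb;
        std::vector<cv::KeyPoint> kpL, kpR;
        cv::Mat descL, descR;
        orb(grayL, cv::noArray(), kpL, descL);
        orb(grayR, cv::noArray(), kpR, descR);

        // 2. Match binary descriptors with Hamming distance and cross-checking.
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(descL, descR, matches);

        // 3. Estimate a homography mapping right -> left with RANSAC
        //    (at least four good matches are required).
        std::vector<cv::Point2f> ptsL, ptsR;
        for (size_t i = 0; i < matches.size(); ++i) {
            ptsL.push_back(kpL[matches[i].queryIdx].pt);
            ptsR.push_back(kpR[matches[i].trainIdx].pt);
        }
        cv::Mat H = cv::findHomography(ptsR, ptsL, CV_RANSAC, 3.0);

        // 4. Warp the right image into the left image's frame and composite.
        cv::Mat pano;
        cv::warpPerspective(right, pano, H,
                            cv::Size(left.cols + right.cols, left.rows));
        left.copyTo(pano(cv::Rect(0, 0, left.cols, left.rows)));
        return pano;
    }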

Use the mountain scene preset provided with the starter code to test your implementation. Once it works on the preset, test live mode: tapping the screen captures an image, and once two images have been captured, they are sent to the stitching pipeline.

Include the stitched output for both the preset and a live capture session.

2.2 Analysis — 10 Pts

Include a brief discussion on the following in your project report:

  1. In your live examples, how does the distance from the camera to the objects in the scene affect performance? What assumptions have we made by using a homographic transformation to map pixels? Why might this be wrong in some cases?
  2. What issues might arise when stitching more than two images into a panorama?
  3. How would inertial sensor data help you with the panorama? Would acceleration and rotation information be helpful? How?

2.3 Extra Credit — 10 Pts

Implement either one of these features:

  • Multi-image support. Extend the starter code to support more than two images. Include a test input sequence and the resulting panorama in your report.
  • Cylindrical projection. Project the images onto a cylinder using OpenCV's warping functionality to correct for distortions (a projection sketch follows this list). Include the results in the report and discuss your implementation.
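For the cylindrical projection option, OpenCV's stitching module provides cv::detail::CylindricalWarper, which takes the camera intrinsics and rotation. The explicit cv::remap sketch below shows the underlying inverse mapping, assuming a known focal length f in pixels (estimated, for example, from the device's field of view); the function name is hypothetical.

    #include <opencv2/opencv.hpp>
    #include <cmath>

    static cv::Mat warpCylindrical(const cv::Mat& img, float f) {
        cv::Mat mapx(img.size(), CV_32F), mapy(img.size(), CV_32F);
        const float xc = img.cols / 2.0f, yc = img.rows / 2.0f;

        // For every destination pixel, compute the source pixel it samples from.
        for (int y = 0; y < img.rows; ++y) {
            for (int x = 0; x < img.cols; ++x) {
                float theta = (x - xc) / f;   // angle around the cylinder
                float h     = (y - yc) / f;   // height on the cylinder
                float X = std::sin(theta), Y = h, Z = std::cos(theta);
                mapx.at<float>(y, x) = f * X / Z + xc;  // project back onto the image plane
                mapy.at<float>(y, x) = f * Y / Z + yc;
            }
        }

        cv::Mat out;
        cv::remap(img, out, mapx, mapy, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
        return out;
    }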