Experiment with dynamic geometry and color.
Create a program that:
Extra credit will be given for making the program interactive - i.e. allowing the user to control some part of what happens with the keyboard and/or mouse.
E-mail your program, and any required data files, to me at depape@buffalo.edu
Be sure to include your name, and the OS you wrote the program on, in comments in the code.
Colors in the physical world can be any wavelength, or combination of wavelengths, of light
Color | Wavelength
---|---
Violet | 420 nm
Blue | 470 nm
Green | 530 nm
Yellow | 580 nm
Orange | 620 nm
Red | 700 nm
Rods & cones absorb light, send signal to brain
Any visible wavelength is perceived the same as some combination of 3 basic colors (roughly blue, green, and red)
RGB = Red , Green , Blue
Each component (R, G, or B) ranges from a minimum (no intensity) to a maximum (full intensity), typically 0.0 to 1.0.
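As a rough illustration (plain Python, nothing here is from the assignment itself), colors can be kept as (R, G, B) tuples of floats in that range and mixed numerically:

```python
# Colors as (R, G, B) tuples, each component from 0.0 (no intensity) to 1.0 (full intensity).
RED   = (1.0, 0.0, 0.0)
GREEN = (0.0, 1.0, 0.0)
BLUE  = (0.0, 0.0, 1.0)
WHITE = (1.0, 1.0, 1.0)   # all three components at full intensity
BLACK = (0.0, 0.0, 0.0)   # no light at all

def mix(c1, c2, t):
    """Linearly interpolate between two colors; t=0.0 gives c1, t=1.0 gives c2."""
    return tuple((1.0 - t) * a + t * b for a, b in zip(c1, c2))

print(mix(RED, BLUE, 0.5))   # (0.5, 0.0, 0.5) - a purple halfway between red and blue
```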
Computer numbers have finite resolution - a limit on how many distinct values can be represented
24-bit color = 8 bits red + 8 bits green + 8 bits blue
(a.k.a. 8 bits per component)
8 bits = 256 possible values
32-bit color usually means 8 bits red + 8 bits green + 8 bits blue + 8 bits alpha
16-bit color can be 5 bits red + 6 bits green + 5 bits blue
HDRI: High Dynamic Range Imaging - uses 16 or 32 bits per component
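A small sketch of what "8 bits per component" means in practice - quantizing floats to the range 0-255 and packing three components into a single 24-bit value (plain Python, for illustration only):

```python
def to_24bit(r, g, b):
    """Quantize floating-point RGB (0.0-1.0) to 8 bits per component
    and pack the result into one 24-bit integer, laid out as 0xRRGGBB."""
    r8 = round(r * 255)   # 8 bits -> 256 possible values per component
    g8 = round(g * 255)
    b8 = round(b * 255)
    return (r8 << 16) | (g8 << 8) | b8

print(hex(to_24bit(1.0, 0.5, 0.0)))   # 0xff8000 - an orange
```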
CMY = Cyan , Magenta , Yellow
C = 1.0 - R
M = 1.0 - G
Y = 1.0 - B
CMYK = Cyan , Magenta , Yellow , Black
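The CMY relation above translates directly into code; a minimal sketch:

```python
def rgb_to_cmy(r, g, b):
    """Convert an RGB color to CMY using the complement relation above."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

print(rgb_to_cmy(1.0, 0.0, 0.0))   # red -> (0.0, 1.0, 1.0): no cyan, full magenta and yellow
```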
HSV = Hue , Saturation , Value
Luminance: the "brightness" of a color.
A formula used in the NTSC television standard, based on human perception:
0.30 * R + 0.59 * G + 0.11 * B
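The NTSC weighting is easy to compute directly; the sketch below also uses Python's standard colorsys module for the RGB-to-HSV conversion mentioned above (all components in the 0.0-1.0 range):

```python
import colorsys

def luminance(r, g, b):
    """Perceived brightness of an RGB color, using the NTSC weights above."""
    return 0.30 * r + 0.59 * g + 0.11 * b

print(luminance(0.0, 1.0, 0.0))            # 0.59 - pure green looks fairly bright
print(luminance(0.0, 0.0, 1.0))            # 0.11 - pure blue looks much darker
print(colorsys.rgb_to_hsv(1.0, 0.5, 0.0))  # standard-library RGB -> HSV conversion
```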
Background | Luminance (cd/m^2)
---|---
Moonless overcast night sky | 0.00003
Moonlit clear night sky | 0.03
Twilight sky | 3
Overcast day sky | 300
Day sky with sunlit clouds | 30,000
Rods & cones adapt to average level of illumination
Rods most sensitive at low levels (scotopic vision)
Cones more sensitive at higher levels (photopic vision)
Historically, displays have been divided between vector and raster
vector - pictures are drawn as a set of precise lines, connecting arbitrary points on the screen
raster - pictures are drawn by scanning the screen in a discrete sequence of rows
A similar distinction continues - graphical objects can be described by images or geometry
A digital image is a 2 dimensional array of pixel colors
Pixel = "picture element"
Each pixel is a sample of a continuous, analog image
Pointillism can be considered to take a similar approach - breaking an image down into discrete samples
Basic image data in computer memory is a stream of numbers
1 239 120 1 1 37 94 8 92 31 80 92 134 89 2 3 50 9 3 10 93 109 134 ...
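The raw stream only becomes an image once you know how to interpret it; a hypothetical sketch, with made-up dimensions and data, of how a flat stream maps to pixels:

```python
# Hypothetical 3x2 RGB image, 8 bits per component, stored as one flat stream of values.
width, height, components = 3, 2, 3
stream = [255, 0, 0,   0, 255, 0,   0, 0, 255,     # row 0: red, green, blue pixels
          10, 20, 30,  40, 50, 60,  70, 80, 90]    # row 1: three dark pixels

def pixel(row, col):
    """Recover the (R, G, B) values of one pixel from the flat stream."""
    start = (row * width + col) * components
    return tuple(stream[start:start + components])

print(pixel(0, 2))   # (0, 0, 255) - the blue pixel at the end of row 0
```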
Important additional information is:
Total memory needed for a simple image:
width * height * components * bytes_per_component
e.g. a 512x512, RGB, 8-bit image requires 768 kilobytes
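As a quick check of that arithmetic (throwaway Python, not part of the notes):

```python
width, height, components, bytes_per_component = 512, 512, 3, 1   # 512x512 RGB, 8 bits per component
total_bytes = width * height * components * bytes_per_component
print(total_bytes, "bytes =", total_bytes // 1024, "kilobytes")   # 786432 bytes = 768 kilobytes
```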
The frame buffer is a chunk of graphics card memory that contains what is displayed on the screen.
Like an image, but for each pixel there can be additional data besides color - depth, masking, etc.
OpenGL renders shapes, images, etc. into pixels of the frame buffer.
Drawing a shape means rasterizing it - converting it into raster form in the frame buffer.
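To make that concrete, here is a minimal sketch of an OpenGL program in Python that rasterizes one colored triangle into the frame buffer. It assumes the PyOpenGL package with its GLUT bindings is installed; the course setup may differ:

```python
# Minimal PyOpenGL + GLUT sketch: draw one triangle with a different color at each vertex.
from OpenGL.GL import *
from OpenGL.GLUT import *

def display():
    glClear(GL_COLOR_BUFFER_BIT)            # clear the frame buffer's color values
    glBegin(GL_TRIANGLES)                   # the triangle is rasterized into frame-buffer pixels
    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0); glVertex2f( 0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0); glVertex2f( 0.0,  0.5)
    glEnd()
    glutSwapBuffers()                       # display the newly rendered frame

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
glutCreateWindow(b"frame buffer demo")
glutDisplayFunc(display)
glutMainLoop()
```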
Create a Python program to do the following: