Astronomy

Rendering stars in 3D space - AbsMag to OpenGL scale values


I am using the AMNH's Digital Universe stars.speck file to render stars in 3D space. The speck file contains parsec-scale coordinates, with our Sun at (0, 0, 0). It also lists the AbsMag values, i.e., the absolute magnitude: the brightness a star would have from 10 parsecs away. I now want to scale my star textures in OpenGL so that the resulting star sizes are accurate. How could I convert the AbsMag inverse logarithmic scale to a scale value from 0 to infinity, where 0.0 means the star disappears and 1.0 means no change? Obviously I can't keep the Sun at 1, as that would make it huge.


"Accurate star size" is a problem. Obviously you do not have the resolution on your screen for accurate angular resolution of stars (they would require an absurdly fine resolution), and the dynamic range of brightness is also too low -- stars range over many orders of magnitude in brightness, far more than a screen can show. Deeply annoying. What one might want to do is to make each star have a brightness and screen size that is proportional to their actual brightness in the sky to give the same feeling as the sky would give. This is still very tough, since screens have different gamma correction. To top it off, the human eye actually has a pretty logarithmic response to light (this is why magnitudes and gamma correction make sense).

Here is a rough idea. A star with AbsMag of $M$ has an actual luminosity of $$L = L_\odot \, 10^{0.4(M_\odot - M)}$$ where $L_\odot$ is the luminosity of the Sun and $M_\odot$ is the absolute magnitude of the Sun. Things get easy if we just count luminosity in terms of $L_\odot$, making the Sun one unit of luminosity.

A spot of radius $r$ on the screen with luminance $l$ radiates power as $P = \pi r^2 l$. That luminance is due to the gamma-corrected luma value $V$ the computer displays: $l = K V^\gamma$.

Assuming the radius changes with luminosity too as $r(L)$, I would try $r(L) = r_0 L^a$ where $a \approx 0.6$ (but this is guesswork) and the sun has radius $r_0$ pixels.

So trying to put this together, we get the pixel brightness as $$V = V_0 (L / r(L)^2)^{1/\gamma} = V'_0 L^{(1-2a)/\gamma}$$ where $V_0$ and $V'_0$ are the pixel brightnesses used for the sun in this model. Basically this squashes the actual luminosity with $a$ (representing using bigger spots for brighter stars, not requiring as intense pixels) and $\gamma$ (to correct for the screen and the eye). What values to use will largely be trial and error unless you want to try to use screen photometry equipment.

So the full formula converting from absolute magnitude to pixel value would be: $$V = V_0 \, 10^{0.4(4.83 - M)(1-2a)/\gamma}.$$
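For concreteness, here is a minimal C++ sketch of the formulas above. The defaults for $r_0$, $V_0$, $a$ and $\gamma$ are assumptions; they are tuning knobs to adjust by eye (or with photometry equipment, if you are so inclined):

#include <cmath>

// Convert absolute magnitude to a sprite radius (in pixels) and a pixel
// brightness in [0, 1], following the formulas above.
struct StarSprite {
    float radiusPixels;   // r(L) = r0 * L^a
    float brightness;     // V    = V0 * L^((1 - 2a) / gamma)
};

StarSprite starSpriteFromAbsMag(float absMag,
                                float r0    = 4.0f,   // Sun's sprite radius in pixels (assumed)
                                float V0    = 1.0f,   // Sun's pixel brightness (assumed)
                                float a     = 0.6f,   // size exponent (guesswork, as above)
                                float gamma = 2.2f)   // typical display gamma
{
    const float sunAbsMag = 4.83f;                            // absolute magnitude of the Sun
    float L = std::pow(10.0f, 0.4f * (sunAbsMag - absMag));   // luminosity in solar units
    StarSprite s;
    s.radiusPixels = r0 * std::pow(L, a);
    s.brightness   = V0 * std::pow(L, (1.0f - 2.0f * a) / gamma);
    if (s.brightness > 1.0f) s.brightness = 1.0f;             // clamp to displayable range
    return s;
}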


This is not an answer, but may help explain why displaying stars with accurate angular diameters is a bad idea.

Per https://www.eso.org/public/usa/news/eso9706/ the star with the largest angular diameter is R Doradus at 0.057 arcseconds which translates to about 1/63158th of a degree.

Even if you used a "tight" 9 degree view for your canvas, 0.057 arcseconds would take up only 1/568421 of the screen width. In other words, your image would have to be at least 284211 pixels wide for the star to be 0.5 pixels wide, the smallest amount you could reasonably round up to 1 pixel.

Even our eyes don't see stars this way, or the night sky would appear completely black (excluding the moon, planets, and Earth-generated light of course).

As good an idea as it seems, displaying stars at their actual angular size is a bad idea because that's not how cameras or even our own eyes see them.


Celestia Forums

These are the data files, along with the celestia.cfg file, for my usual Celestia setup as of November 28, 2019. If I find enough time, I may update this regularly in the future.

#************************************************************************
# Celestia Configuration File - edited 3/1/12
#
# This file contains configuration data read by Celestia each time it
# is run. Many of the items may be changed to suit your specific needs
# or requirements. PLEASE make a backup copy of this file before you
# make any changes to it.
#
# To learn more about Celestia, visit the Celestia forums at:
# http://www.shatters.net/forum/
# or the Celestia web site at: http://www.shatters.net/celestia/
#************************************************************************

#------------------------------------------------------------------------
# This section contains a list of data files that Celestia uses to load
# information about stars, constellations and locations. DO NOT change
# these file names or the order in which they are listed, unless you
# know exactly what you are doing. Most of these files can be viewed
# with a plain text editor. Discussion about their content and formats
# can be found on the Celestia forums: http://www.shatters.net/forum/
#
# If you want to load all your stars from .stc files, you can now comment
# out the StarDatabase entry.
#------------------------------------------------------------------------
StarDatabase "data/stars1M.dat"
StarNameDatabase "data/starnames-ED.dat" #replaces starnames.dat
StarCatalogs [ "data/revised.stc"
"data/extrasolar.stc"
"data/nearstars.stc"
"data/charm2.stc"
"data/visualbins.stc"
"data/spectbins.stc" ]

HDCrossIndex "data/hdxindex.dat"
SAOCrossIndex "data/saoxindex.dat"
GlieseCrossIndex "data/gliesexindex.dat"

SolarSystemCatalogs [ "data/solarsys-educational.ssc"
"data/asteroids-educational.ssc"
"data/comets-educational.ssc"
"data/outersys-educational.ssc"
"data/DIRL_comets_v3.02.ssc"
#"data/spacecraft.ssc"
"data/extrasolar.ssc"
#"data/solsys_locs-educational.ssc"
#"data/eros_locs.ssc"
#"data/gaspra_locs.ssc"
#"data/ida_locs.ssc"
#"data/merc_locs.ssc"
#"data/venus_locs.ssc"
#"data/earth_locs.ssc"
"data/mars_locs.ssc"
"data/moon_locs.ssc"
#"data/marsmoons_locs.ssc"
#"data/jupitermoons_locs.ssc"
#"data/saturnmoons_locs.ssc"
#"data/uranusmoons_locs.ssc"
#"data/neptunemoons_locs.ssc"
"data/ring_locs.ssc"
"data/world-capitals.ssc" ]
DeepSkyCatalogs [ "data/galaxies2.dsc"
"data/globulars.dsc" ]

AsterismsFile "data/asterisms.dat"
BoundariesFile "data/boundaries.dat"

#------------------------------------------------------------------------
# Default star textures for each spectral type
#
# The default textures may be overridden in individual star definitions.
#------------------------------------------------------------------------
StarTextures
{
# This texture will be used for any spectral type not listed
# in this block.
Default "gstar.*"
}

#------------------------------------------------------------------------
# User Interface files .
#
# Despite their ".cel" file extension, these are not CEL scripts, but
# rather data files that populate controls such as menus and dialog
# boxes.
#
# FavoritesFile
# -------------
# This is where Bookmarks data are stored. The file does not exist until
# you save a Bookmark from within Celestia. You can view this file with
# a plain text editor and if you write CEL scripts, it contains some
# useful information.
#
# DestinationFile
# ---------------
# This is the list of Destinations used in the Tour Guide dialog box,
# accessed via the Navigation Menu. You can edit this file with a plain
# text editor to add your own destinations to the dialog box. The order
# in which the items are listed in the file is the order in which they
# will be listed in the Tour Guide dialog.
#
# Cursor
# ------
# This parameter allows you to select from three cursors, but currently
# only in the Windows version of Celestia.
# * White crosshair ("crosshair") --> default cursor
# * Inverting crosshair ("inverting crosshair")
# * Standard Windows arrow ("arrow")
#
# The inverting crosshair can be a better choice because it's more
# visible on bright backgrounds. However, should you decide to try this
# cursor, TEST IT CLOSELY. Not all graphics chipsets support an inverting
# cursor, which will cause Windows to fall back to software emulation.
# The emulated cursor interacts with OpenGL applications in unfortunate
# ways, forcing a lot of extra redrawing and cutting by half the frame
# rate on a GeForce2-equipped laptop. So, if you change this, check your
# FPS rates to make sure you haven't kicked Windows into software
# emulation mode.
#------------------------------------------------------------------------
FavoritesFile "favorites.cel"
DestinationFile "guide.cel"
Cursor "crosshair"

# turns on the LUA educational interface

#------------------------------------------------------------------------
# Included CEL script files.
#
# The following CEL script files are included in the basic Celestia
# distribution. These script files may be viewed and edited with a
# plain text editor. They may both be modified or replaced to suit your
# specific needs.
#
# InitScript is the CEL script that is automatically run each time
# Celestia is started. The default script (start.cel) travels to Io, one
# of Jupiter's moons.
#
# DemoScript is the CEL script that is run when you press the "d" key
# on your keyboard from within Celestia. The default script (demo.cel)
# takes you on a short tour of some interesting places in our solar
# system.
#
# To learn more about how to use and write CEL scripts and Lua scripts
# in Celestia, please visit the Celestia Scripting forum at:
# http://www.shatters.net/forum/viewforum.php?f=9
#------------------------------------------------------------------------
InitScript "SlowerGo-start-ED.celx"
DemoScript "demo.cel"

#------------------------------------------------------------------------
# The 'extras' directory is located under the celestia root directory
# and is used for storing third-party add-ons to Celestia. To learn
# more about Add-Ons for Celestia, visit the Celestia Add-Ons forum at:
# http://www.shatters.net/forum/viewforum.php?f=6
#
# You may specify additional add-on directories by adding additional
# entries, such as the following example shows:
# ExtrasDirectories [ "extras" "myextras1" "myextras2" ]
#
# To specify absolute paths on windows, you either have to use "/" or
# double backslashes to separate path components. Example:
# ExtrasDirectories [ "D:/celestia-extras" ]
# or
# ExtrasDirectories [ "D:celestia-extras" ]
#------------------------------------------------------------------------
ExtrasDirectories [ "extras" "educational-extras/general"]

#------------------------------------------------------------------------
# Font definitions.
#
# The following entries define the fonts Celestia will use to display
# text on the display screen. To view the list of fonts available with
# your distribution of Celestia, look in the fonts directory located
# under the Celestia root directory. The default fonts are UTF-8
# compatible in order to display non-English characters.
#
# Font: Used to display all informational text.
# Default: "sans12.txf"
#
# LabelFont: Used to display all label text (objects, locations, etc.).
# Default "sans12.txf"
#
# TitleFont: Used to display object names, messages, and script text.
# Default "sansbold20.txf"
#------------------------------------------------------------------------
Font "sans12.txf"
LabelFont "sans12.txf"
TitleFont "sansbold20.txf"

#------------------------------------------------------------------------
# FaintestVisibleMagnitude defines the lowest magnitude at which a star
# will be displayed in Celestia. This setting may be adjusted real-time
# via the '[' and ']' keys in Celestia. The default value is 6.0.
#------------------------------------------------------------------------
FaintestVisibleMagnitude 9.2

#------------------------------------------------------------------------
# RotateAcceleration defines the speed at which an object will be
# rotated in Celestia, when using a keypress, such as the left and right
# arrow keys. A higher value will rotate the object quicker, while a
# lower value will cause a slower rotation. The default value is 120.0.
#------------------------------------------------------------------------
RotateAcceleration 40.0

#------------------------------------------------------------------------
# MouseRotationSensitivity defines the speed at which an object will be
# rotated in Celestia, when using the mouse -- press both mouse-buttons
# or Ctrl+LeftMouseButton, and move the mouse left or right. A higher
# value will rotate the object quicker, while a lower value will cause
# a slower rotation. A value of 0.0 (zero) will disable this particular
# feature. The default value is 1.0.
#------------------------------------------------------------------------
MouseRotationSensitivity 1.0

#------------------------------------------------------------------------
# The following parameter is used in Lua (.celx) scripting.
#
# ScriptScreenshotDirectory defines the directory where screenshots
# are to be stored. The default value is "", i.e. Celestia's
# installation directory.
#------------------------------------------------------------------------
ScriptScreenshotDirectory ""

#------------------------------------------------------------------------
# CELX-scripts can request permission to perform dangerous operations,
# such as reading, writing and deleting files or executing external
# programs. If granted, a malicious script could use this to destroy
# data or compromise system security.
# The following parameter determines what Celestia does upon such
# requests:
# "ask": ask the user if he want's to allow access (default)
# "allow": always allow such requests
# "deny": always deny such requests
#------------------------------------------------------------------------
ScriptSystemAccessPolicy "ask"

#------------------------------------------------------------------------
# The following lines are render detail settings. Assigning higher
# values will produce better quality images, but may cause some older
# systems to run slower.
#
# OrbitPathSamplePoints defines how many sample points to use when
# rendering orbit paths. The default value is 100.
#
# RingSystemSections defines the number of segments in which ring
# systems are rendered. The default value is 100.
#
# ShadowTextureSize defines the size* of shadow texture to be used.
# The default value is 256. Maximum useful value is 2048.
#
# EclipseTextureSize defines the size* of eclipse texture to be used.
# The default value is 128. Maximum useful value is 1024.
#
# * The ShadowTextureSize and EclipseTextureSize values should both be
# powers of two (128, 256, 512, etc.). Using larger values will
# reduce the jagged edges of eclipse shadows and shadows on planet
# rings, but it will decrease the amount of memory available for
# planet textures.
#------------------------------------------------------------------------
OrbitPathSamplePoints 100
RingSystemSections 512

ShadowTextureSize 1024
EclipseTextureSize 512

#-----------------------------------------------------------------------
# Set the level of multisample antialiasing. Not all 3D graphics
# hardware supports antialiasing, though most newer graphics chipsets
# do. Larger values will result in smoother edges with a cost in
# rendering speed. 4 is a sensible setting for recent, higher-end
# graphics hardware; 2 is probably better for mid-range graphics. The
# default value is 1, which disables antialiasing.
#-----------------------------------------------------------------------
AntialiasingSamples 4

#------------------------------------------------------------------------
# The following line is commented out by default.
#
# Celestia enables and disables certain rendering features based on
# the set of extensions supported by the installed OpenGL driver and 3D
# graphics hardware. With IgnoreGLExtensions, you may specify a list of
# extensions that Celestia will treat as unsupported. This is useful
# primarily for the developers of Celestia.
#------------------------------------------------------------------------
# IgnoreGLExtensions [ "GL_ARB_vertex_program" ]

Notes:
* Some of the details are complete fantasy - for example, various stars are given the names of The Fairly OddParents and Danny Phantom characters: "Timmy", "Cosmo", "Wanda", "Jazz", "Jack", and "Maddie" for Barnard's Star, Luyten 726-8 B and A, Y Canum Venaticorum, Betelgeuse, and Antares, respectively. You may want to edit them out.
* This does NOT include all the addons I have made.
* This is designed especially for use in Celestia 1.6.1-ED, so it may conflict with other addons if used with normal Celestia (1.6.1 or 1.7.0) or Celestia Origin. But of course, you can edit these files to make them compatible.


3D Graphics with OpenGL

A modern computer has a dedicated Graphics Processing Unit (GPU) to produce images for the display, with its own graphics memory (also called video RAM or VRAM).

Pixels and Frame

All modern displays are raster-based. A raster is a 2D rectangular grid of pixels (or picture elements). A pixel has two properties: a color and a position. Color is expressed in RGB (Red-Green-Blue) components - typically 8 bits per component or 24 bits per pixel (or true color). The position is expressed in terms of (x, y) coordinates. The origin (0, 0) is located at the top-left corner, with x-axis pointing right and y-axis pointing down. This is different from the conventional 2D Cartesian coordinates, where y-axis is pointing upwards.

The number of color-bits per pixel is called the depth (or precision) of the display. The number of rows by columns of the rectangular grid is called the resolution of the display, which can range from 640x480 (VGA), 800x600 (SVGA), 1024x768 (XGA) to 1920x1080 (FHD), or even higher.

Frame Buffer and Refresh Rate

The color values of the pixels are stored in a special part of graphics memory called frame buffer. The GPU writes the color value into the frame buffer. The display reads the color values from the frame buffer row-by-row, from left-to-right, top-to-bottom, and puts each of the values onto the screen. This is known as raster-scan. The display refreshes its screen several dozen times per second, typically 60Hz for LCD monitors and higher for CRT tubes. This is known as the refresh rate.

A complete screen image is called a frame.

Double Buffering and VSync

While the display is reading from the frame buffer to display the current frame, we might be updating its contents for the next frame (not necessarily in raster-scan manner). This would result in the so-called tearing, in which the screen shows parts of the old frame and parts of the new frame.

This could be resolved by using so-called double buffering. Instead of using a single frame buffer, modern GPU uses two of them: a front buffer and a back buffer. The display reads from the front buffer, while we can write the next frame to the back buffer. When we finish, we signal to GPU to swap the front and back buffer (known as buffer swap or page flip).

Double buffering alone does not solve the entire problem, as the buffer swap might occur at an inappropriate time, for example, while the display is in the middle of displaying the old frame. This is resolved via the so-called vertical synchronization (or VSync) at the end of the raster-scan. When we signal to the GPU to do a buffer swap, the GPU will wait till the next VSync to perform the actual swap, after the entire current frame is displayed.

The most important point is: when the VSync buffer-swap is enabled, you cannot refresh the display faster than the refresh rate of the display. For LCD/LED displays, the refresh rate is typically locked at 60Hz, i.e., 60 frames per second or 16.7 milliseconds per frame. Furthermore, if your application refreshes at a fixed rate, the resulting refresh rate is likely to be an integer fraction of the display's refresh rate, i.e., 1/2, 1/3, 1/4, etc.

3D Graphics Rendering Pipeline

A pipeline, in computing terminology, refers to a series of processing stages in which the output from one stage is fed as the input of the next stage, similar to a factory assembly line or water/oil pipe. With massive parallelism, pipeline can greatly improve the overall throughput.

In computer graphics, rendering is the process of producing image on the display from model description.

The 3D Graphics Rendering Pipeline accepts description of 3D objects in terms of vertices of primitives (such as triangle, point, line and quad), and produces the color-value for the pixels on the display.

The 3D graphics rendering pipeline consists of the following main stages:

  1. Vertex Processing: Process and transform individual vertices.
  2. Rasterization: Convert each primitive (connected vertices) into a set of fragments. A fragment can be treated as a pixel in 3D space, aligned with the pixel grid, carrying attributes such as position, color, normal and texture.
  3. Fragment Processing: Process individual fragments.
  4. Output Merging: Combine the fragments of all primitives (in 3D space) into 2D color-pixel for the display.

In modern GPUs, the vertex processing stage and fragment processing stage are programmable. You can write programs, known as vertex shader and fragment shader to perform your custom transform for vertices and fragments. The shader programs are written in C-like high level languages such as GLSL (OpenGL Shading Language), HLSL (High-Level Shading Language for Microsoft Direct3D), or Cg (C for Graphics by NVIDIA).

On the other hand, the rasterization and output merging stages are not programmable, but configurable - via configuration commands issued to the GPU.

Vertices, Primitives, Fragment and Pixels

3D Graphics Coordinate Systems

OpenGL adopts the Right-Hand Coordinate System (RHS). In the RHS, the x-axis is pointing right, y-axis is pointing up, and z-axis is pointing out of the screen. With your right-hand fingers curving from the x-axis towards the y-axis, the thumb is pointing at the z-axis. RHS is counter-clockwise (CCW). The 3D Cartesian Coordinates is a RHS.

Some graphics software (such as Microsoft Direct3D) use Left-hand System (LHS), where the z-axis is inverted. LHS is clockwise (CW). In this article, we shall adopt the RHS and CCW used in OpenGL.

Primitives

The inputs to the Graphics Rendering Pipeline are geometric primitives (such as triangles, points, lines or quads), which are formed by one or more vertices.

OpenGL supports three classes of geometric primitives: points, line segments, and closed polygons. They are specified via vertices. Each vertex is associated with attributes such as position, color, normal and texture coordinates. OpenGL provides 10 primitives: GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP and GL_POLYGON. Spheres, 3D boxes and pyramids are not primitives; they are typically assembled from triangles or quads.

Vertices

Recall that a primitive is made up of one or more vertices. A vertex, in computer graphics, has these attributes:

  1. Position in 3D space V=(x, y, z): typically expressed in floating point numbers.
  2. Color: expressed in RGB (Red-Green-Blue) or RGBA (Red-Green-Blue-Alpha) components. The component values are typically normalized to the range of 0.0 and 1.0 (or 8-bit unsigned integer between 0 and 255). Alpha is used to specify the transparency, with alpha of 0 for totally transparent and alpha of 1 for opaque.
  3. Vertex-Normal N=(nx, ny, nz): We are familiar with the concept of surface normal, where the normal vector is perpendicular to the surface. In computer graphics, however, we need to attach a normal vector to each vertex, known as vertex-normal. Normals are used to differentiate the front- and back-face, and for other processing such as lighting. Right-hand rule (or counter-clockwise) is used in OpenGL. The normal is pointing outwards, indicating the outer surface (or front-face).
  4. Texture T=(s, t): In computer graphics, we often wrap a 2D image around an object to make it look realistic. A vertex could have 2D texture coordinates (s, t), which provide a reference point into a 2D texture image.
  5. Others.
OpenGL Primitives and Vertices

As an example, the following OpenGL code segment specifies a color-cube centered at the origin.

To create a geometric object or model, we use a pair of glBegin(PrimitiveType) and glEnd() calls to enclose the vertices that form the model. For primitive types that end with 'S' (e.g., GL_QUADS ), we can define multiple shapes of the same type.

Each of the 6 faces is a primitive quad ( GL_QUADS ). We first set the color via glColor3f(red, green, blue) ; this color is applied to all subsequent vertices until it is overridden. The 4 vertices of the quad are specified via glVertex3f(x, y, z) in counter-clockwise order, such that the surface-normal points outwards, indicating the front-face. All four vertices have this surface-normal as their vertex-normal.
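The code listing itself did not survive; a minimal sketch of such a color-cube in legacy (fixed-pipeline) OpenGL follows. The side length of 1.0 and the face colors are assumptions:

#include <GL/gl.h>

// Draw a color-cube of side 1.0 centered at the origin. The vertices of each
// face are listed counter-clockwise as seen from outside the cube, so the
// implied surface-normal points outwards (the front-face of each quad).
void drawColorCube() {
   glBegin(GL_QUADS);
      glColor3f(1.0f, 0.0f, 0.0f);              // front face (z = +0.5): red
      glVertex3f(-0.5f, -0.5f,  0.5f);
      glVertex3f( 0.5f, -0.5f,  0.5f);
      glVertex3f( 0.5f,  0.5f,  0.5f);
      glVertex3f(-0.5f,  0.5f,  0.5f);
      glColor3f(0.0f, 1.0f, 0.0f);              // back face (z = -0.5): green
      glVertex3f(-0.5f, -0.5f, -0.5f);
      glVertex3f(-0.5f,  0.5f, -0.5f);
      glVertex3f( 0.5f,  0.5f, -0.5f);
      glVertex3f( 0.5f, -0.5f, -0.5f);
      glColor3f(0.0f, 0.0f, 1.0f);              // top face (y = +0.5): blue
      glVertex3f(-0.5f,  0.5f,  0.5f);
      glVertex3f( 0.5f,  0.5f,  0.5f);
      glVertex3f( 0.5f,  0.5f, -0.5f);
      glVertex3f(-0.5f,  0.5f, -0.5f);
      glColor3f(1.0f, 1.0f, 0.0f);              // bottom face (y = -0.5): yellow
      glVertex3f(-0.5f, -0.5f, -0.5f);
      glVertex3f( 0.5f, -0.5f, -0.5f);
      glVertex3f( 0.5f, -0.5f,  0.5f);
      glVertex3f(-0.5f, -0.5f,  0.5f);
      glColor3f(1.0f, 0.0f, 1.0f);              // right face (x = +0.5): magenta
      glVertex3f( 0.5f, -0.5f,  0.5f);
      glVertex3f( 0.5f, -0.5f, -0.5f);
      glVertex3f( 0.5f,  0.5f, -0.5f);
      glVertex3f( 0.5f,  0.5f,  0.5f);
      glColor3f(0.0f, 1.0f, 1.0f);              // left face (x = -0.5): cyan
      glVertex3f(-0.5f, -0.5f, -0.5f);
      glVertex3f(-0.5f, -0.5f,  0.5f);
      glVertex3f(-0.5f,  0.5f,  0.5f);
      glVertex3f(-0.5f,  0.5f, -0.5f);
   glEnd();
}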

Indexed Vertices

Primitives often share vertices. Instead of repeatedly specifying the vertices, it is more efficient to create an index list of vertices, and use the indexes in specifying the primitives.

For example, the following code fragment specifies a pyramid formed by 5 vertices. We first define the 5 vertices in a vertex array, followed by their respective colors. For each of the 5 faces, we simply provide the vertex indices and color indices.
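The listing is missing; here is a minimal sketch of the idea. The vertex coordinates, colors and face ordering are assumptions:

#include <GL/gl.h>

// 5 shared vertices (apex + 4 base corners), a matching color table, and an
// index list for the 4 triangular side faces. The base quad reuses indices 1-4.
GLfloat pyramidVertices[5][3] = {
   { 0.0f,  1.0f,  0.0f },    // 0: apex
   {-1.0f, -1.0f,  1.0f },    // 1: front-left
   { 1.0f, -1.0f,  1.0f },    // 2: front-right
   { 1.0f, -1.0f, -1.0f },    // 3: back-right
   {-1.0f, -1.0f, -1.0f }     // 4: back-left
};
GLfloat pyramidColors[5][3] = {
   {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 0}, {1, 0, 1}
};
GLubyte sideIndices[4][3] = {   // counter-clockwise as seen from outside
   {0, 1, 2}, {0, 2, 3}, {0, 3, 4}, {0, 4, 1}
};

void drawPyramid() {
   glBegin(GL_TRIANGLES);                      // the 4 side faces
   for (int f = 0; f < 4; ++f)
      for (int v = 0; v < 3; ++v) {
         GLubyte i = sideIndices[f][v];
         glColor3fv(pyramidColors[i]);         // look up color by index
         glVertex3fv(pyramidVertices[i]);      // look up position by index
      }
   glEnd();
   glBegin(GL_QUADS);                          // the base
   for (int i = 4; i >= 1; --i) {              // reversed so its normal faces down
      glColor3fv(pyramidColors[i]);
      glVertex3fv(pyramidVertices[i]);
   }
   glEnd();
}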

Pixel vs. Fragment

Pixels refer to the dots on the display, which are aligned in a 2-dimensional grid of rows and columns corresponding to the display's resolution. A pixel is 2-dimensional, with an (x, y) position and an RGB color value (there is no alpha value for pixels). The purpose of the Graphics Rendering Pipeline is to produce the color value of every pixel for display on the screen, given the input primitives.

In order to produce the grid-aligned pixels for the display, the rasterizer of the graphics rendering pipeline, as its name implies, takes each input primitive and performs a raster-scan to produce a set of grid-aligned fragments enclosed within the primitive. A fragment is 3-dimensional, with an (x, y, z) position. The (x, y) position is aligned with the 2D pixel-grid. The z-value (not grid-aligned) denotes its depth. The z-values are needed to capture the relative depth of the various primitives, so that occluded objects can be discarded (or the alpha channel of transparent objects processed) in the output-merging stage.

Fragments are produced via interpolation of the vertices. Hence, a fragment has all the vertex's attributes such as color, fragment-normal and texture coordinates.

In modern GPU, vertex processing and fragment processing are programmable. The programs are called vertex shader and fragment shader.

(Direct3D uses the term "pixel" for "fragment".)

Vertex Processing

Coordinates Transformation

The process used to produce a 3D scene on the display in Computer Graphics is like taking a photograph with a camera. It involves four transformations:

  1. Arrange the objects (or models, or avatars) in the world (Model Transformation or World transformation).
  2. Position and orient the camera (View transformation).
  3. Select a camera lens (wide angle, normal or telescopic), adjust the focus length and zoom factor to set the camera's field of view (Projection transformation).
  4. Print the photo on a selected area of the paper (Viewport transformation) - this happens in the rasterization stage.

A transform converts a vertex V from one space (or coordinate system) to another space V'. In computer graphics, a transform is carried out by multiplying the vertex with a transformation matrix, i.e., V' = M V.

Model Transform (or Local Transform, or World Transform)

Each object (or model or avatar) in a 3D scene is typically drawn in its own coordinate system, known as its model space (or local space, or object space). As we assemble the objects, we need to transform the vertices from their local spaces to the world space, which is common to all the objects. This is known as the world transform. The world transform consists of a series of scaling (scale the object to match the dimensions of the world), rotation (align the axes), and translation (move the origin).

Rotation and scaling belong to a class of transformation called linear transformation (by definition, a linear transformation preserves vector addition and scalar multiplication). Linear transform and translation form the so-called affine transformation. Under an affine transformation, a straight line remains a straight line and ratios of distances between points are preserved.

In OpenGL, a vertex V at (x, y, z) is represented as a 3x1 column vector, i.e., (x, y, z) written vertically.

Other systems, such as Direct3D, use a row vector to represent a vertex.

Scaling

3D scaling can be represented in a 3x3 matrix:

where &alphax, &alphay and &alphaz represent the scaling factors in x, y and z direction, respectively. If all the factors are the same, it is called uniform scaling.

We can obtain the transformed result V' of vertex V via matrix multiplication, as follows:
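The two matrix images are missing; in the standard form implied by the text they are:

$$S = \begin{pmatrix} \alpha_x & 0 & 0 \\ 0 & \alpha_y & 0 \\ 0 & 0 & \alpha_z \end{pmatrix}, \qquad V' = S\,V = \begin{pmatrix} \alpha_x x \\ \alpha_y y \\ \alpha_z z \end{pmatrix}$$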

Rotation

3D rotation operates about an axis of rotation (2D rotation operates about a center of rotation). 3D rotations about the x, y and z axes by an angle θ (measured in the counter-clockwise direction) can be represented by the following 3x3 matrices:
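The matrices themselves are missing; the standard counter-clockwise rotation matrices for a right-handed system are:

$$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}, \quad R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}, \quad R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$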

The rotation angles about the x, y and z axes, denoted θ_x, θ_y and θ_z, are known as Euler angles. They can be used to specify any arbitrary orientation of an object. The combined transform is called the Euler transform.

[TODO] Link to Proof and illustration

Translation

Translation is not a linear transform, but it can be modeled as a vector addition, V' = V + D, where D = (d_x, d_y, d_z) is the displacement vector.

Fortunately, we can represent translation using a 4x4 matrix and obtain the transformed result via matrix multiplication, if the vertices are represented in so-called 4-component homogeneous coordinates (x, y, z, 1), with an additional fourth w-component of 1. We shall describe the significance of the w-component later, in the projection transform. In general, if the w-component is not equal to 1, then (x, y, z, w) corresponds to Cartesian coordinates (x/w, y/w, z/w). If w=0, it represents a vector instead of a point (or vertex).

Using the 4-component homogeneous coordinates, translation can be represented in a 4x4 matrix, as follows:

The transformed vertex V' can again be computed via matrix multiplication:
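Reconstructing the two elided equations, with the displacement written as D = (d_x, d_y, d_z):

$$T = \begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad V' = T\,V = \begin{pmatrix} x + d_x \\ y + d_y \\ z + d_z \\ 1 \end{pmatrix}$$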

[TODO] Link to homogeneous coordinates

Summary of Affine Transformations

We can rewrite the scaling and rotation matrices as 4x4 matrices using homogeneous coordinates.

Successive Transforms

A series of successive affine transforms (T1, T2, T3, ...) operating on a vertex V can be computed via concatenated matrix multiplications V' = ... T3 T2 T1 V. The matrices can be combined before being applied to the vertex, because matrix multiplication is associative, i.e., T3 (T2 (T1 V)) = (T3 T2 T1) V.

Example
Transformation of Vertex-Normal

Recall that a vertex has a vertex-normal, in addition to its (x, y, z) position and color.

Suppose that M is a transform matrix. It can be applied to vertex-normals only if the transform does not include non-uniform scaling. Otherwise, the transformed normal will no longer be orthogonal to the surface. For non-uniform scaling, we can use (M^-1)^T (the inverse-transpose) as the transform matrix for normals, which ensures that the transformed normal remains orthogonal.

View Transform

After the world transform, all the objects are assembled into the world space. We shall now place the camera to capture the view.

Positioning the Camera

In 3D graphics, we position the camera onto the world space by specifying three view parameters: EYE, AT and UP, in world space.

  1. The point EYE (ex, ey, ez) defines the location of the camera.
  2. The vector AT (ax, ay, az) denotes the direction where the camera is aiming at, usually at the center of the world or an object.
  3. The vector UP (ux, uy, uz) roughly denotes the upward orientation of the camera. UP typically coincides with the y-axis of the world space. UP is roughly orthogonal to AT, but not necessarily so. As UP and AT define a plane, we can construct a vector of the camera space that is orthogonal to AT.

Notice that the 9 values actually produce 6 degrees of freedom to position and orientate the camera, i.e., 3 of them are not independent.

OpenGL

In OpenGL, we can use the GLU function gluLookAt() to position the camera:

The default settings of gluLookAt() are:

That is, the camera is positioned at the origin (0, 0, 0), aimed into the screen (negative z-axis), and faced upwards (positive y-axis). To use the default settings, you have to place the objects at negative z-values.
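The elided listings presumably showed the gluLookAt() signature and the call equivalent to the defaults; a sketch (to be issued while the model-view matrix is selected):

// Signature (GLU):
//   void gluLookAt(GLdouble eyeX,    GLdouble eyeY,    GLdouble eyeZ,
//                  GLdouble centerX, GLdouble centerY, GLdouble centerZ,   // the AT point
//                  GLdouble upX,     GLdouble upY,     GLdouble upZ);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 0.0,     // EYE at the origin
          0.0, 0.0, -1.0,    // looking into the screen (negative z-axis)
          0.0, 1.0, 0.0);    // UP along the positive y-axis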

Computing the Camera Coordinates

From EYE, AT and UP, we first form the camera's coordinate axes (xc, yc, zc), relative to the world space. We fix zc to be opposite to AT, i.e., AT points in the -zc direction. We can obtain the direction of xc by taking the cross-product of AT and UP. Finally, we get the direction of yc by taking the cross-product of zc and xc. Take note that UP is roughly, but not necessarily, orthogonal to AT.

Transforming from World Space to Camera Space

It is much more convenient to express all the coordinates in the camera space. This is done via the view transform.

The view transform consists of two operations: a translation (to move EYE to the origin), followed by a rotation (to align the axes):

The View Matrix

We can combine the two operations into one single View Matrix:
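The matrix image is missing; in the standard form, where the rows of the rotation are the camera axes and the translation moves EYE to the origin, it is:

$$M_{view} = \begin{pmatrix} x_{c,x} & x_{c,y} & x_{c,z} & -x_c \cdot EYE \\ y_{c,x} & y_{c,y} & y_{c,z} & -y_c \cdot EYE \\ z_{c,x} & z_{c,y} & z_{c,z} & -z_c \cdot EYE \\ 0 & 0 & 0 & 1 \end{pmatrix}$$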

Model-View Transform

In Computer Graphics, moving the objects relative to a fixed camera (Model transform), and moving the camera relative to a fixed object (View transform) produce the same image, and therefore are equivalent. OpenGL, therefore, manages the Model transform and View transform in the same manner on a so-called Model-View matrix. Projection transformation (in the next section) is managed via a Projection matrix.

Projection Transform - Perspective Projection

Once the camera is positioned and oriented, we need to decide what it can see (analogous to choosing the camera's field of view by adjusting the focus length and zoom factor), and how the objects are projected onto the screen. This is done by selecting a projection mode (perspective or orthographic) and specifying a viewing volume or clipping volume. Objects outside the clipping volume are clipped out of the scene and cannot be seen.

View Frustum in Perspective View

The camera has a limited field of view, which takes the shape of a view frustum (a truncated pyramid), and is specified by four parameters: fovy, aspect, zNear and zFar.

  1. fovy: specifies the total vertical angle of view, in degrees.
  2. aspect: the ratio of width to height. For a particular z, we can get the height from fovy, and then the width from the aspect ratio.
  3. zNear: the distance to the near plane.
  4. zFar: the distance to the far plane.

The projection with a view frustum is known as perspective projection, where objects nearer to the COP (Center of Projection) appear larger than objects of the same size that are farther from the COP.

An object outside the view frustum is not visible to the camera. It does not contribute to the final image and shall be discarded to improve the performance. This is known as view-frustum culling. If an object partially overlaps with the view frustum, it will be clipped in the later stage.

OpenGL

In OpenGL, there are two functions for choosing the perspective projection and setting its clipping volume:
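These are gluPerspective() and glFrustum(); a sketch, where the 45-degree field of view, the near/far distances and the width/height variables are illustrative assumptions:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (double)width / height, 0.1, 100.0);   // fovy, aspect, zNear, zFar
// or specify the near-plane window directly:
// glFrustum(left, right, bottom, top, zNear, zFar);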

Clipping-Volume Cuboid

Next, we apply a so-called projection matrix to transform the view frustum into an axis-aligned cuboid clipping-volume of 2x2x1 centered on the near plane. The near plane is at z=0, whereas the far plane is at z=-1. Both planes have dimension 2x2, ranging from -1 to +1.

The Perspective Projection Matrix

The projection matrix is given by:
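The matrix is missing here. For reference, the standard symmetric matrix produced by gluPerspective(), with f = cot(fovy/2), is shown below; note that it maps the frustum to the canonical 2x2x2 cube (z from -1 to +1), a slightly different convention from the 2x2x1 cuboid described above:

$$P = \begin{pmatrix} f/\mathrm{aspect} & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & \dfrac{zFar + zNear}{zNear - zFar} & \dfrac{2\, zFar\, zNear}{zNear - zFar} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$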

Take note that the last row of the matrix is no longer [0 0 0 1]. With an input vertex of (x, y, z, 1), the resultant w-component will not be 1. We need to normalize the resultant homogeneous coordinates (x, y, z, w) to (x/w, y/w, z/w, 1) to obtain the position in 3D space. (It is remarkable that homogeneous coordinates can be used for translation as well as for perspective projection.)

The final step is to flip the z-axis, so that the near plane is still located at z=0, but the far plane is flipped and located at z=1 (instead of z=-1). In other words, the larger the z, the further is the object. To perform flipping, we can simply negate the third row of the projection matrix.

After the flip, the coordinate system is no longer a Right-Hand System (RHS), but becomes a Left-hand System (LHS).

OpenGL's Model-View Matrix and Projection Matrix

OpenGL manages the transforms via two matrices: a model-view matrix ( GL_MODELVIEW for handling model and view transforms) and a projection matrix ( GL_PROJECTION for handling projection transform). These two matrices can be manipulated independently.

We need to first select the matrix for manipulation via:

We can reset the currently selected matrix via:

We can save the value of the currently selected matrix onto the stack and restore it back via:
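A sketch of the three elided listings:

glMatrixMode(GL_MODELVIEW);   // or GL_PROJECTION: select the matrix to manipulate
glLoadIdentity();             // reset the currently selected matrix to the identity
glPushMatrix();               // save the current matrix on its stack
// ... apply transforms and draw ...
glPopMatrix();                // restore the saved matrix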

Push and pop use a stack and operate in a last-in-first-out manner, and can be nested.

Projection Transform - Orthographic Projection

Besides the commonly used perspective projection, there is the so-called orthographic projection (or parallel projection), a special case in which the camera is placed very far away from the scene (analogous to using a telescopic lens). The view volume for orthographic projection is a parallelepiped (instead of a frustum as in perspective projection).

OpenGL

In OpenGL, we can use glOrtho() function to choose the orthographic projection mode and specify its clipping volume:

For 2D graphics, we can use gluOrtho2D() (GLU function instead of GL) to choose 2D orthographic projection and set its clipping area:

The default 3D projection in OpenGL is orthographic (not perspective), with parameters (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0), i.e., a cube with sides of length 2.0, centered at the origin.
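A sketch of the two calls (left, right, bottom, top, zNear and zFar are placeholders for your clipping volume):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, zNear, zFar);   // 3D orthographic projection
// For 2D graphics (the z-range defaults to -1..+1):
// gluOrtho2D(left, right, bottom, top);
// The default projection is equivalent to glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0).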

Outputs of the Vertex Processing Stage

Each vertex is transformed and positioned in the clipping-volume cuboid space, together with its vertex-normal. The x and y coordinates (in the range of -1 to +1) represent its position on the screen, and the z-value (in the range of 0 to 1) represents its depth, i.e., how far away it is from the near plane.

The vertex processing stage transforms individual vertices. The relationships between vertices (i.e., primitives) are not considered in this stage.

Rasterization

In the previous vertex processing stage, the vertices, whose coordinates are usually floating-point values, are not necessarily aligned with the pixel-grid of the display. The relationships between vertices, in terms of primitives, are also not considered.

In this rasterization stage, each primitive (such as a triangle, quad, point or line), which is defined by one or more vertices, is raster-scanned to obtain a set of fragments enclosed within the primitive. Fragments can be treated as 3D pixels aligned with the pixel-grid. Whereas 2D pixels have a position and an RGB color value, 3D fragments, which are interpolated from the vertices, carry the same set of attributes as the vertices, such as position, color, normal and texture coordinates.

The substages of rasterization include viewport transform, clipping, perspective division, back-face culling, and scan conversion. The rasterizer is not programmable, but configurable via the directives.

Viewport Transform

Viewport

A viewport is a rectangular display area on the application window, measured in screen coordinates (in pixels, with the origin at the top-left corner). A viewport defines the size and shape of the display area onto which the projected scene captured by the camera is mapped. It may or may not occupy the entire screen.

In 3D graphics, a viewport is 3-dimensional to support z-ordering, which is needed for situations such as ordering of overlapping windows.

OpenGL

In OpenGL, by default, the viewport is set to cover the entire application window. We can use the glViewport() function to choose a smaller area (e.g., for split-screen or multi-screen application).

We can also set the z-range of viewport via glDepthRange() :
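A sketch of the two calls; winWidth and winHeight are placeholders for your window dimensions:

glViewport(0, 0, winWidth / 2, winHeight / 2);   // x, y, width, height in pixels (here: one quarter of the window)
glDepthRange(0.0, 1.0);                          // z-range of the viewport (these are the defaults)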

Viewport Transform

Our final transform, viewport transform, maps the clipping-volume (2x2x1 cuboid) to the 3D viewport, as illustrated.

Viewport transform is made up of a series of reflection (of y-axis), scaling (of x, y and z axes), and translation (of the origin from the center of the near plane of clipping volume to the top-left corner of the 3D viewport). The viewport transform matrix is given by:

If the viewport covers the entire screen, minX = minY = minZ = 0 , w = screenWidth and h = screenHeight .

Aspect Ratios of Viewport and Projection Plane

It is obvious that if the aspect ratio of the viewport (set via glViewport() ) and that of the projection plane (set via gluPerspective() or glOrtho() ) are not the same, the shapes will be distorted. Hence, it is important to use the same aspect ratio for the viewport and the projection plane.

The glViewport() command should be included in the reshape() handler, so that the viewport is resized whenever the window is resized. It is important that the aspect ratio of the projection plane is reconfigured to match the viewport's aspect ratio, so as not to distort the shapes. In other words, glViewport() and gluPerspective()/glOrtho() should be issued together.
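A typical GLUT reshape handler that keeps the two aspect ratios in sync, as a sketch; the 45-degree field of view and the near/far distances are assumptions:

#include <GL/glut.h>

void reshape(int width, int height) {
   if (height == 0) height = 1;                  // avoid a division by zero
   glViewport(0, 0, width, height);              // viewport covers the whole window
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluPerspective(45.0, (double)width / height, 0.1, 100.0);   // same aspect ratio
   glMatrixMode(GL_MODELVIEW);
}
// registered in main() via: glutReshapeFunc(reshape);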

Back-Face Culling

While view-frustum culling discards objects outside the view frustum, back-face culling discards primitives that are not facing the camera.

A back face can be identified from the normal vector and the vector connecting the surface to the camera.

Back-face culling shall not be enabled if the object is transparent and alpha blending is enabled.

OpenGL

In OpenGL, face culling is disabled by default, and both front-faces and back-faces are rendered. We can use the function glCullFace() to specify whether the back-face ( GL_BACK ), the front-face ( GL_FRONT ), or both ( GL_FRONT_AND_BACK ) shall be culled.
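A sketch:

glEnable(GL_CULL_FACE);     // face culling is off by default
glCullFace(GL_BACK);        // cull back-faces (GL_FRONT or GL_FRONT_AND_BACK also possible)
glFrontFace(GL_CCW);        // counter-clockwise winding marks the front face (the default)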

Fragment Processing

After rasterization, we have a set of fragments for each primitive. A fragment has a position, which is aligned to the pixel-grid. It has a depth, color, normal and texture coordinates, which are interpolated from the vertices.

Fragment processing focuses on texture and lighting, which have the greatest impact on the quality of the final image. We shall discuss texture and lighting in detail in later sections.

The operations involved in the fragment processor are:

  1. The first operation in fragment processing is texturing.
  2. Next, primary and secondary colors are combined, and fog calculation may be applied.
  3. The optional scissor test, alpha test, stencil test, and depth-buffer test are carried out, if enabled.
  4. Then, the optional blending, dithering, logical operation, and bitmasking may be performed.

Output Merging

Z-Buffer and Hidden-Surface Removal

The z-buffer (or depth-buffer) can be used to remove hidden surfaces (surfaces blocked by other surfaces that cannot be seen from the camera). The z-buffer of the screen is initialized to 1 (farthest) and the color-buffer is initialized to the background color. For each fragment (of each primitive) processed, its z-value is checked against the buffered value. If its z-value is smaller than the z-buffer value, its color and z-value are copied into the buffers. Otherwise, this fragment is occluded by another object and is discarded. With this algorithm, the fragments can be processed in any order.

OpenGL

In OpenGL, to use the z-buffer for hidden-surface removal via depth testing, we need to do three things (sketched in the code after this list):

  1. Request a z-buffer via glutInitDisplayMode().
  2. Enable depth testing on the z-buffer.
  3. Clear the z-buffer (to 1, denoting the farthest) and the color buffer (to the background color) before drawing each frame.
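A sketch of the three steps (the GLUT_RGB and GLUT_DOUBLE flags are typical assumptions):

glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);    // 1. request a depth buffer
glEnable(GL_DEPTH_TEST);                                     // 2. enable depth testing
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);          // 3. clear both buffers each frame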

Alpha-Blending

Hidden-surface removal works only if the front object is totally opaque. In computer graphics, a fragment is not necessarily opaque; it can carry an alpha value specifying its degree of transparency. Alpha is typically normalized to the range [0, 1], with 0 denoting totally transparent and 1 denoting totally opaque. If the fragment is not totally opaque, then part of the background object can show through, which is known as alpha blending. Alpha blending and hidden-surface removal are mutually exclusive.

The simplest blending equation is as follows:
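From the symbol definitions below, this is the standard source-over blend:

$$c = \alpha_s\, c_s + (1 - \alpha_s)\, c_d$$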

where c_s is the source color, α_s is the source alpha, and c_d is the destination (background) color. The 3 color channels (RGB) are blended independently.

For this blending equation, the order in which the fragments are drawn is important. The fragments must be sorted from back to front, with the largest z-value processed first. Also, the destination alpha value is not used.

There are many other blending equations to achieve different effects.

OpenGL

In OpenGL, to perform alpha blending, we need to enable blending and disable depth-test (which performs hidden-surface removal). For example,
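A sketch of the calls the example presumably showed; the choice of blending factors is the common one discussed below:

glEnable(GL_BLEND);                                  // enable alpha blending
glDisable(GL_DEPTH_TEST);                            // disable hidden-surface removal
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // source and destination factors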

Source and Destination Blending Factors

In OpenGL, the glBlendFunc(sourceFactor, destinationFactor) function is used to specify the so-called source and destination blending factors.

Suppose that a new object (called the source) is to be blended with the existing objects in the color buffer (called the destination). The source's color is (Rs, Gs, Bs, As), and the destination's color is (Rd, Gd, Bd, Ad). The source and destination color values are weighted by the source blending factor and the destination blending factor respectively, then combined to produce the resultant value. Each of the RGB components is computed independently.

For example, suppose the source blending factor for G component is p and the destination blending factor for G component is q, the resultant G component is p×Gs + q×Gd.

There are many choices of blending factors. For example, a popular choice is a source factor of GL_SRC_ALPHA and a destination factor of GL_ONE_MINUS_SRC_ALPHA,

where each component of the source is weighted by the source's alpha value (As), and each component of the destination is weighted by 1-As. In this case, if the original color component values are within [0.0, 1.0], the resultant value is guaranteed to be within this range. The drawback is that the final color depends on the order of rendering when many surfaces are added one after another (because the destination alpha value is not considered).

Another example is a source factor of GL_SRC_ALPHA and a destination factor of GL_ONE,

where each component of the source is weighted by the source's alpha value (As), and each component of the destination is weighted by 1. The resultant value may overflow, but the final color does not depend on the order in which the objects are rendered.

Other values for the blending factors include GL_ZERO , GL_ONE , GL_SRC_COLOR , GL_ONE_MINUS_SRC_COLOR , GL_DST_COLOR , GL_ONE_MINUS_DST_COLOR , GL_SRC_ALPHA , GL_ONE_MINUS_SRC_ALPHA , GL_DST_ALPHA , GL_ONE_MINUS_DST_ALPHA , GL_CONSTANT_COLOR , GL_ONE_MINUS_CONSTANT_COLOR , GL_CONSTANT_ALPHA , and GL_ONE_MINUS_CONSTANT_ALPHA .

The default for source blending factor is GL_ONE , and the default for destination blending factor is GL_ZERO . That is, opaque (totally non-transparent) surfaces.

The computations also explain why depth-testing should be disabled when alpha blending is enabled: for translucent surfaces, the final color is determined by blending the source and destination colors, instead of by relative depth (the color of the nearer surface) as for opaque surfaces.

Lighting

Lighting refers to the handling of interactions between the light sources and the objects in the 3D scene. Lighting is one of the most important factors in producing a realistic scene.

The color that we see in the real world is the result of the interaction between the light sources and the colored material surfaces. In other words, three parties are involved: the viewer, the light sources, and the material. When light (of a certain spectrum) from a light source strikes a surface, some is absorbed and some is reflected or scattered. The angle of reflection depends on the angle of incidence and the surface normal. The amount of scattering depends on the smoothness and the material of the surface. The reflected light also spans a certain color spectrum, which depends on the color spectrum of the incident light and the absorption properties of the material. The strength of the reflected light depends on the position and distance of the light source and the viewer, as well as on the material. The reflected light may strike other surfaces, where again some is absorbed and some is reflected. The color that we perceive for a surface is the reflected light hitting our eye. In a 2D photograph or painting, objects appear three-dimensional due to small variations in color, known as shades.

There are two classes of lighting models:

  1. Local illumination: considers only direct lighting. The color of a surface depends on the reflectance properties of the surface and the direct lighting.
  2. Global illumination: in the real world, objects also receive indirect lighting reflected from other objects and the environment. A global illumination model considers the indirect lighting reflected from other objects in the scene. Global illumination models are complex and compute-intensive.

Phong Lighting Model for Light-Material Interaction

The Phong lighting model is a local illumination model which is computationally inexpensive and was used extensively, especially in the early days of computer graphics. It considers four types of lighting: diffuse, specular, ambient and emissive.

Consider a fragment P on a surface. Four vectors are used: the direction to the light source L, the direction to the viewer V, the fragment-normal N, and the perfect reflector R. The perfect reflector R can be computed from the surface normal N and the incident light L, according to the law of reflection, which states that the angle of incidence equals the angle of reflection.

Diffuse Light

Diffuse light models a distant directional light source (such as sunlight). The reflected light is scattered equally in all directions and appears the same to all viewers regardless of their positions, i.e., it is independent of the viewer vector V. The strength of the incident light depends on the angle between the light source L and the normal N, i.e., the dot product of L and N.

The resultant color can be computed as follows:

The strength of the incident light is max(L·N, 0); the max function discards negative values, i.e., cases where the angle exceeds 90 degrees. Suppose the light source has diffuse color s_diff and the fragment has diffuse reflectance m_diff; the resultant color is c_diff = max(L·N, 0) s_diff m_diff,

where the RGB components of the color are computed independently.

Specular Light

The reflected light is concentrated along the direction of perfect reflector R. What a viewer sees depends on the angle (cosine) between V and R.

The resultant color due to specular reflection is c_spec = max(R·V, 0)^sh s_spec m_spec, where s_spec is the light's specular color and m_spec is the material's specular reflectance.

sh is known as the shininess factor. As sh increases, the light cone becomes narrower (because R·V ≤ 1) and the highlighted spot becomes smaller.

Ambient Light

A constant amount of light applied to every point of the scene. The resultant color is c_amb = s_amb m_amb, the product of the light's ambient intensity and the material's ambient reflectance.

Emissive Light

Some surfaces may emit light. The resultant color is c_em = m_em.

Resultant Color

The resultant color is the sum of the contributions of all four components: c = c_diff + c_spec + c_amb + c_em.

OpenGL's Lighting and Material

OpenGL provides point sources (omni-directional), spotlights (directional, with a cone-shaped beam), and ambient light (a constant factor). A light source may be located at a fixed position or infinitely far away. Each source has separate ambient, diffuse, and specular components, and each component has RGB values. The lighting calculation is performed on each of the components independently (local illumination, without considering indirect lighting). Materials are modeled in the same manner: each material has separate ambient, diffuse, and specular components, with parameters specifying the fraction of the corresponding component of the light source that is reflected. A material may also have an emissive component.

In OpenGL, you need to enable the lighting state, and each of the light sources, identified via GL_LIGHT0 to GL_LIGHTn .

Once lighting is enabled, the colors assigned by glColor() are no longer used. Instead, the color depends on the light-material interaction and the viewer's position.

You can use glLight() to define a light source ( GL_LIGHT0 to GL_LIGHTn ):

The default for GL_POSITION is (0, 0, 1) relative to camera coordinates, so it is behind the default camera position (0, 0, 0).
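A sketch of enabling lighting and configuring GL_LIGHT0; the position and color values are illustrative assumptions:

glEnable(GL_LIGHTING);                    // enable the lighting state
glEnable(GL_LIGHT0);                      // enable light source 0

GLfloat lightPos[]     = { 1.0f, 1.0f, 1.0f, 0.0f };   // w = 0: a directional light
GLfloat lightDiffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  lightDiffuse);
// GL_AMBIENT and GL_SPECULAR can be set the same way.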

GL_LIGHT0 is special, with a default value of white (1, 1, 1) for its GL_DIFFUSE and GL_SPECULAR components, so you can enable GL_LIGHT0 right away using its default settings. For the other light IDs ( GL_LIGHT1 to GL_LIGHTn ), the defaults for GL_DIFFUSE and GL_SPECULAR are black (0, 0, 0); the default GL_AMBIENT is black for all lights.

Material

Similar to a light source, a material has reflectivity parameters for the specular ( GL_SPECULAR ), diffuse ( GL_DIFFUSE ) and ambient ( GL_AMBIENT ) components (for each of the RGBA color components), which specify the fraction of light reflected. A surface may also emit light ( GL_EMISSION ). A surface has a shininess parameter ( GL_SHININESS ): the higher the value, the more concentrated the reflected light is in the small area around the perfect reflector, and the shinier the surface appears. Furthermore, a surface has two faces, front and back, which may have the same or different parameters.

You can use the glMaterial() function to specify these parameters for the front ( GL_FRONT ), back ( GL_BACK ), or both ( GL_FRONT_AND_BACK ) faces. The front face is determined by the surface normal (implicitly defined by the vertex order with the right-hand rule, or explicitly via the glNormal() function).

The default material has a gray surface (under white light), with a small amount of ambient reflection (0.2, 0.2, 0.2, 1.0), high diffuse reflection (0.8, 0.8, 0.8, 1.0), and no specular reflection (0.0, 0.0, 0.0, 1.0).
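A sketch of setting material properties for the front faces; the reflectance values and the shininess of 64 are assumptions:

GLfloat matAmbient[]  = { 0.2f, 0.2f, 0.2f, 1.0f };
GLfloat matDiffuse[]  = { 0.8f, 0.8f, 0.8f, 1.0f };
GLfloat matSpecular[] = { 0.5f, 0.5f, 0.5f, 1.0f };
GLfloat shininess[]   = { 64.0f };                     // valid range is 0 to 128
glMaterialfv(GL_FRONT, GL_AMBIENT,   matAmbient);
glMaterialfv(GL_FRONT, GL_DIFFUSE,   matDiffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR,  matSpecular);
glMaterialfv(GL_FRONT, GL_SHININESS, shininess);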

Vertex and Fragment Shaders

Global Illumination Model

Texture

In computer graphics, we often overlay (or paste or wrap) images, called textures, over the graphical objects to make them realistic.

A texture is typically a 2D image. Each element of the texture is called a texel (texture element), analogous to a pixel (picture element). The 2D texture coordinates (s, t) are typically normalized to [0.0, 1.0]. In OpenGL the origin is at the lower-left corner of the texture image, with the s-axis pointing right and the t-axis pointing up (image loaders that store rows top-to-bottom effectively flip this).

Texture Wrapping

Although the 2D texture coordinates are normalized to [0.0, 1.0], we can configure the behavior when the coordinates fall outside this range.

The typical solutions are:

  1. Clamp the texture coordinates to [0.0, 1.0] and ignore those outside this range.
  2. Wrap (or repeat) the texture along the s-axis, the t-axis, or both. You may also use a "mirror" mode so that the repeated copies join continuously.

In OpenGL, we use the function glTexParameter() to configure the wrapping behavior for the s and t axes ( GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T ) individually. Two modes are supported: GL_REPEAT (repeat the texture pattern) and GL_CLAMP (do not repeat, but clamp to the range 0.0 to 1.0).
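A sketch, for the currently bound 2D texture:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);   // repeat along s
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);    // clamp along t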

Texture Filtering

In general, the resolution of the texture image differs from that of the displayed fragments (or pixels). If the resolution of the texture image is lower, we need to perform so-called magnification to enlarge the texture image to match the display. On the other hand, if the resolution of the texture image is higher, we perform minification.

Magnification

The commonly used methods are:

  1. Nearest Point Filtering: the texture color-value of the fragment is taken from the nearest texel. This filter leads to "blockiness" as many fragments are using the same texel.
  2. Bilinear Interpolation: the texture color-value of the fragment is formed via bilinear interpolation of the four nearest texels. This yields smoother result.
Minification

Minification is needed if the resolution of the texture image is larger than the fragment. Again, you can use the "nearest-point sampling" or "bilinear interpolation" methods.

However, these sampling methods often lead to the so-called aliasing artifact, because the sampling frequency is low compared with the signal frequency. For example, a far-away object rendered under perspective projection will look strange due to its high signal frequency.

[TODO] diagram of the aliasing artifact

Mipmapping

A better approach to minification is called mipmapping (using mipmaps, or miniature maps), where lower-resolution copies of the texture image are created. For example, if the original image is 64x64 (level 0), we can create lower-resolution images at 32x32, 16x16, 8x8, 4x4, 2x2 and 1x1. The highest resolution is referred to as level 0; the next is level 1, and so on. We can then use the nearest matching-resolution texture image, or perform linear interpolation between the two nearest matching-resolution texture images.

OpenGL Texture Filtering

In OpenGL, you can set the filter for magnification and minification independently.

We can use a single image and ask OpenGL to produce the lower-resolution images via the command gluBuild2DMipmaps() (in place of glTexImage2D() ).

We can then specify that a mipmapping filter is to be used via:
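A typical sequence looks like this (a sketch; width, height and image are assumed to come from your image loader):

    // Build the whole mipmap chain (levels 1, 2, ...) from the level-0 image
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, image);
    // Minification: trilinear filtering (interpolate within and between mipmap levels)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // Magnification never uses mipmaps; plain bilinear filtering here
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);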

Furthermore, in perspective projection, the fast texture-interpolation scheme may not handle the distortion caused by the perspective projection. The following command can be used to ask the renderer to produce a better texture image, at the expense of processing speed.
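The command in question is most likely the perspective-correction hint:

    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);  // best quality, slower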


OpenGL: debugging "Single-Pass Wireframe Rendering"

I'm trying to implement the paper "Single-Pass Wireframe Rendering", which seems pretty simple, but it's giving me what I'd expect as far as thick, dark values.

The paper didn't give the exact code to figure out the altitudes, so I did it as I thought fit. The code should project the three vertices into viewport space, get their "altitudes" and send them to the fragment shader.

The fragment shader determines the distance of the closest edge and generates an edgeIntensity. I'm not sure what I'm supposed to do with this value, but since it's supposed to scale between [0,1], I multiply the inverse against my outgoing color, but it's just very weak.

I had a few questions that I'm not sure are addressed in the papers. First, should the altitudes be calculated in 2D instead of 3D? Second, they cite DirectX features, and DirectX has a different viewport-space z-range, correct? Does that matter? I'm premultiplying the outgoing altitude distances by the w-value of the viewport-space coordinates, as they recommend, to correct for perspective projection.

The non-corrected image seems to have clear problems not correcting for the perspective on the more away-facing sides, but the perspective-corrected one has very weak values.

Can anyone see what's wrong with my code or how to go about debugging it from here?




Drawing in Space: Geometric Primitives and Buffers in OpenGL

You learned from Chapter 2 that OpenGL does not render (draw) these primitives directly on the screen. Instead, rendering is done in a buffer, which is later swapped to the screen. We refer to these two buffers as the front (the screen) and back color buffers. By default, OpenGL commands are rendered into the back buffer, and when you call glutSwapBuffers (or your operating system–specific buffer swap function), the front and back buffers are swapped so that you can see the rendering results. You can, however, render directly into the front buffer if you want. This capability can be useful for displaying a series of drawing commands so that you can see some object or shape actually being drawn. There are two ways to do this; both are discussed in the following section.

Using Buffer Targets

The first way to render directly into the front buffer is to just tell OpenGL that you want drawing to be done there. You do this by calling the following function:
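That function is glDrawBuffer:

    void glDrawBuffer(GLenum mode);

    // For example, render all subsequent commands directly to the screen:
    glDrawBuffer(GL_FRONT);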

Specifying GL_FRONT causes OpenGL to render to the front buffer, and GL_BACK moves rendering back to the back buffer. OpenGL implementations can support more than just a single front and back buffer for rendering, such as left and right buffers for stereo rendering, and auxiliary buffers. These other buffers are documented further in the reference section at the end of this chapter.

The second way to render to the front buffer is to simply not request double-buffered rendering when OpenGL is initialized. OpenGL is initialized differently on each OS platform, but with GLUT, we initialize our display mode for RGB color and double-buffered rendering with the following line of code:
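For an RGB, double-buffered window, that line is typically:

    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);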

To get single-buffered rendering, you simply omit the bit flag GLUT_DOUBLE, as shown here:
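With the flag omitted, the call becomes:

    glutInitDisplayMode(GLUT_RGB);   // single-buffered rendering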

When you do single-buffered rendering, it is important to call either glFlush or glFinish whenever you want to see the results actually drawn to screen. A buffer swap implicitly performs a flush of the pipeline and waits for rendering to complete before the swap actually occurs. We'll discuss the mechanics of this process in more detail in Chapter 11, "It's All About the Pipeline: Faster Geometry Throughput."

Listing 3.12 shows the drawing code for the sample program SINGLE. This example uses a single rendering buffer to draw a series of points spiraling out from the center of the window. The RenderScene() function is called repeatedly and uses static variables to cycle through a simple animation. The output of the SINGLE sample program is shown in Figure 3.35.

Figure 3.35 Output from the single-buffered rendering example.

Listing 3.12 Drawing Code for the SINGLE Sample

Manipulating the Depth Buffer

The color buffers are not the only buffers that OpenGL renders into. In the preceding chapter, we mentioned other buffer targets, including the depth buffer. However, the depth buffer is filled with depth values instead of color values. Requesting a depth buffer with GLUT is as simple as adding the GLUT_DEPTH bit flag when initializing the display mode:
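For example:

    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);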

You've already seen that enabling the use of the depth buffer for depth testing is as easy as calling the following:
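That is:

    glEnable(GL_DEPTH_TEST);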

Even when depth testing is not enabled, if a depth buffer is created, OpenGL will write corresponding depth values for all color fragments that go into the color buffer. Sometimes, though, you may want to temporarily turn off writing values to the depth buffer as well as depth testing. You can do this with the function glDepthMask:
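Its prototype is:

    void glDepthMask(GLboolean mask);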

Setting the mask to GL_FALSE disables writes to the depth buffer but does not disable depth testing from being performed using any values that have already been written to the depth buffer. Calling this function with GL_TRUE re-enables writing to the depth buffer, which is the default state. Masking color writes is also possible but a bit more involved, and will be discussed in Chapter 6.

Cutting It Out with Scissors

One way to improve rendering performance is to update only the portion of the screen that has changed. You may also need to restrict OpenGL rendering to a smaller rectangular region inside the window. OpenGL allows you to specify a scissor rectangle within your window where rendering can take place. By default, the scissor rectangle is the size of the window, and no scissor test takes place. You turn on the scissor test with the ubiquitous glEnable function:
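That call is:

    glEnable(GL_SCISSOR_TEST);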

You can, of course, turn off the scissor test again with the corresponding glDisable function call. The rectangle within the window where rendering is performed, called the scissor box, is specified in window coordinates (pixels) with the following function:
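The function is:

    void glScissor(GLint x, GLint y, GLsizei width, GLsizei height);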

The x and y parameters specify the lower-left corner of the scissor box, with width and height being the corresponding dimensions of the scissor box. Listing 3.13 shows the rendering code for the sample program SCISSOR. This program clears the color buffer three times, each time with a smaller scissor box specified before the clear. The result is a set of overlapping colored rectangles, as shown in Figure 3.36.

Listing 3.13 Using the Scissor Box to Render a Series of Rectangles

Figure 3.36 Shrinking scissor boxes.

Using the Stencil Buffer

Using the OpenGL scissor box is a great way to restrict rendering to a rectangle within the window. Frequently, however, we want to mask out an irregularly shaped area using a stencil pattern. In the real world, a stencil is a flat piece of cardboard or other material that has a pattern cut out of it. Painters use the stencil to apply paint to a surface using the pattern in the stencil. Figure 3.37 shows how this process works.

Figure 3.37 Using a stencil to paint a surface in the real world.

In the OpenGL world, we have the stencil buffer instead. The stencil buffer provides a similar capability but is far more powerful because we can create the stencil pattern ourselves with rendering commands. To use OpenGL stenciling, we must first request a stencil buffer using the platform-specific OpenGL setup procedures. When using GLUT, we request one when we initialize the display mode. For example, the following line of code sets up a double-buffered RGB color buffer with stencil:
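For example:

    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_STENCIL);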

The stencil operation is relatively fast on modern hardware-accelerated OpenGL implementations, but it can also be turned on and off with glEnable/glDisable. For example, we turn on the stencil test with the following line of code:
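That line is:

    glEnable(GL_STENCIL_TEST);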

With the stencil test enabled, drawing occurs only at locations that pass the stencil test. You set up the stencil test that you want to use with this function:
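The function and its parameters are:

    void glStencilFunc(GLenum func, GLint ref, GLuint mask);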

The stencil function that you want to use, func, can be any one of these values: GL_NEVER, GL_ALWAYS, GL_LESS, GL_LEQUAL, GL_EQUAL, GL_GEQUAL, GL_GREATER, and GL_NOTEQUAL. These values tell OpenGL how to compare the value already stored in the stencil buffer with the value you specify in ref. They correspond to never passing, always passing, and passing if the reference value is less than, less than or equal to, equal to, greater than or equal to, greater than, or not equal to the value already stored in the stencil buffer, respectively. In addition, you can specify a mask value that is bit-wise ANDed with both the reference value and the value from the stencil buffer before the comparison takes place.

You need to realize that the stencil buffer may be of limited precision. Stencil buffers are typically only between 1 and 8 bits deep. Each OpenGL implementation may have its own limits on the available bit depth of the stencil buffer, and each operating system or environment has its own methods of querying and setting this value. In GLUT, you just get the most stencil bits available, but for finer-grained control, you need to refer to the operating system–specific chapters later in the book. Values passed to ref and mask that exceed the available bit depth of the stencil buffer are simply truncated, and only the maximum number of least significant bits is used.

Creating the Stencil Pattern

You now know how the stencil test is performed, but how are values put into the stencil buffer to begin with? First, we must make sure that the stencil buffer is cleared before we start any drawing operations. We do this in the same way that we clear the color and depth buffers with glClear—using the bit mask GL_STENCIL_BUFFER_BIT. For example, the following line of code clears the color, depth, and stencil buffers simultaneously:
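That is:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);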

The value used in the clear operation is set previously with a call to
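That call is glClearStencil, which specifies the value the stencil buffer is filled with whenever it is cleared:

    void glClearStencil(GLint s);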

When the stencil test is enabled, rendering commands are tested against the value in the stencil buffer using the glStencilFunc parameters we just discussed. Fragments (color values placed in the color buffer) are either written or discarded based on the outcome of that stencil test. The stencil buffer itself is also modified during this test, and what goes into the stencil buffer depends on how you've called the glStencilOp function:
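Its prototype is:

    void glStencilOp(GLenum fail, GLenum zfail, GLenum zpass);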

These values tell OpenGL how to change the value of the stencil buffer if the stencil test fails (fail), and even if the stencil test passes, you can modify the stencil buffer if the depth test fails (zfail) or passes (zpass). The valid values for these arguments are GL_KEEP, GL_ZERO, GL_REPLACE, GL_INCR, GL_DECR, GL_INVERT, GL_INCR_WRAP, and GL_DECR_WRAP. These values correspond to keeping the current value, setting it to zero, replacing with the reference value (from glStencilFunc), incrementing or decrementing the value, inverting it, and incrementing/decrementing with wrap, respectively. Both GL_INCR and GL_DECR increment and decrement the stencil value but are clamped to the minimum and maximum value that can be represented in the stencil buffer for a given bit depth. GL_INCR_WRAP and likewise GL_DECR_WRAP simply wrap the values around when they exceed the upper and lower limits of a given bit representation.

In the sample program STENCIL, we create a spiral line pattern in the stencil buffer, but not in the color buffer. The bouncing rectangle from Chapter 2 comes back for a visit, but this time, the stencil test prevents drawing of the red rectangle anywhere the stencil buffer contains a 0x1 value. Listing 3.14 shows the relevant drawing code.

Listing 3.14 Rendering Code for the STENCIL Sample

The following two lines cause all fragments to fail the stencil test. The values of ref and mask are irrelevant in this case and are not used.
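They presumably look something like this (the ref and mask values are shown as zeros precisely because GL_NEVER ignores them):

    glStencilFunc(GL_NEVER, 0x0, 0x0);        // every fragment fails the stencil test...
    glStencilOp(GL_INCR, GL_INCR, GL_INCR);   // ...and each failure increments the stencil value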

The arguments to glStencilOp, however, cause the value in the stencil buffer to be written (incremented actually), regardless of whether anything is seen on the screen. Following these lines, a white spiral line is drawn, and even though the color of the line is white so you can see it against the blue background, it is not drawn in the color buffer because it always fails the stencil test (GL_NEVER). You are essentially rendering only to the stencil buffer!

Next, we change the stencil operation with these lines:
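Plausibly something like:

    glStencilFunc(GL_NOTEQUAL, 0x1, 0x1);     // draw only where the stencil value is not 0x1
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);   // and leave the stencil buffer unchanged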

Now, drawing will occur anywhere the stencil buffer is not equal (GL_NOTEQUAL) to 0x1, which is anywhere onscreen that the spiral line is not drawn. The subsequent call to glStencilOp is optional for this example, but it tells OpenGL to leave the stencil buffer alone for all future drawing operations. Although this sample is best seen in action, Figure 3.38 shows an image of what the bouncing red square looks like as it is "stenciled out."

Just like the depth buffer, you can also mask out writes to the stencil buffer by using the function glStencilMask:
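Its prototype is shown below; each bit of the mask enables writing to the corresponding stencil bit plane, so in practice it is often used like a boolean:

    void glStencilMask(GLuint mask);

    glStencilMask(0);            // disable all writes to the stencil buffer
    glStencilMask(0xFFFFFFFF);   // re-enable them (the default)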

Setting the mask to false does not disable stencil test operations but does prevent any operation from writing values into the stencil buffer.

Figure 3.38 The bouncing red square with masking stencil pattern.


The Basic Graphics Pipeline

A primitive in OpenGL is simply a collection of vertices, hooked together in a predefined way. A single point, for example, is a primitive that requires exactly one vertex. Another example is a triangle, a primitive made up of three vertices. Before we talk about the different kinds of primitives, let's take a look first at how a primitive is assembled out of individual vertices. The basic rendering pipeline takes three vertices and turns them into a triangle. It may also apply color, one or more textures, and move them about. This pipeline is also programmable; you actually write two programs that are executed by the graphics hardware to process the vertex data and fill in the pixels (we call them fragments because there can actually be more than one fragment per pixel, but more on that later) on-screen. To understand how this basic process works in OpenGL, let's take a look at a simplified version of the OpenGL rendering pipeline, shown here in Figure 3.1.

Client-Server

First notice that we have divided the pipeline in half. On the top is the client side, and on the bottom is the server. Basic client-server design is applied when the client side of the pipeline is separated from the server side functionally. In OpenGL's case, the client side is code that lives in the main CPU's memory and is executed within the application program, or within the driver in main system memory. The driver assembles rendering commands and data and sends them to the server for execution. On a typical desktop computer, the server is across some system bus and is in fact the hardware and memory on the graphics card.

Client and server also function asynchronously, meaning they are both independent pieces of software or hardware, or both. To achieve maximum performance, you want both sides to be busy as much as possible. The client is continually assembling blocks of data and commands into buffers that are then sent to the server for execution. The server then executes those buffers, while at the same time the client is getting ready to send the next bit of data or information for rendering. If the server ever runs out of work while waiting on the client, or if the client has to stop and wait for the server to become ready for more commands or data, we call this a pipeline stall. Pipeline stalls are the bane of performance programmers, and we really don't want CPUs or GPUs standing around idle waiting for work to do.

Shaders

The two biggest boxes in Figure 3.1 are for the vertex shader and the fragment shader. A shader is a program written in GLSL (we get into GLSL programming in Chapter 6, "Thinking Outside the Box: Nonstock Shaders"). GLSL looks a whole lot like C; in fact, these programs even start with the familiar main function. These shaders must be compiled and linked together (again much like a C or C++ program) from source code before they can be used. The final ready-to-use shader program is then made up of the vertex shader as the first stage of processing and the fragment shader as the second stage of processing. Note that we are taking a simplified approach here. There is actually something called a geometry shader that can (optionally) fit in between these two, as well as all sorts of feedback mechanisms for moving data back and forth. There are also some post-fragment processing features such as blending, stencil, and depth testing, which we also cover later.

The vertex shader processes incoming data from the client, applying transformations, or doing other types of math to calculate lighting effects, displacement, color values, and so on. To render a triangle with three vertices, the vertex shader is executed three times, once for each vertex. On today's hardware, there are multiple execution units running simultaneously, which means all three vertices are processed simultaneously. Graphics processors today are massively parallel computers. Don't be fooled by clock speed when comparing them to CPUs. They are orders of magnitude faster at graphics operations.

Three vertices are now ready to be rasterized. The primitive assembly box in Figure 3.1 is meant to show that the three vertices are then put together and the triangle is rasterized, fragment by fragment. Each fragment is filled in by executing the fragment shader, which outputs the final color value you will see on-screen. Again, today's hardware is massively parallel, and it is quite possible a hundred or more of these fragment programs could be executing simultaneously.

Of course, to get anything to happen, you must feed these shaders some data. There are three ways in which you, the programmer, pass data to OpenGL shaders for rendering: attributes, uniforms, and textures.

Attributes

An attribute is a data element that changes per vertex. In fact, the vertex position itself is actually an attribute. Attributes can be floating-point, integer, or boolean data, and attributes are always stored internally as a four component vector, even if you don't use all four components. For example, a vertex position might be stored as an x, a y, and a z value. That would be three out of the four components. Internally, OpenGL makes the fourth component (W if you just have to know) a one. In fact, if you are drawing just in the xy plane (and ignoring z), then the third component will be automatically made a zero, and again the fourth will be made a one. To complete the pattern, if you send down only a single floating-point value as an attribute, the second and third components are zero, while the fourth is still made a one. This default behavior applies to any attribute you set up, not just vertex positions, so be careful when you don't use all four components available to you. Other things you might change per vertex besides the position in space are texture coordinates, color values, and surface normals used for lighting calculations. Attributes, however, can have any meaning you want in the vertex program; you are in control.

Attributes are copied from a pointer to local client memory to a buffer that is stored (most likely) on the graphics hardware. Attributes are only processed by the vertex shader and have no meaning to the fragment shader. Also, to clarify that attributes change per vertex, this does not mean they cannot have duplicate values, only that there is actually one stored value per vertex. Usually, they are different of course, but it is possible you could have a whole array of the same values. This would be very wasteful, however, and if you needed a data element that was the same for all the attributes in a single batch, there is a better way.

Uniforms

A uniform is a single value that is, well, uniform for the entire batch of attributes; that is, it doesn't change. You set the values of uniform variables usually just before you send the command to render a primitive batch. Uniforms can be used for virtually an unlimited number of purposes. You could set a single color value that is applied to an entire surface. You could set a time value that you change every time you render to do some type of vertex animation (note that the uniform changes once per batch, not once per vertex here). One of the most common uses of uniforms is to set transformation matrices in the vertex shader (this is almost the entire purpose of Chapter 4, "Basic Transformations: A Vector/Matrix Primer").

Like attributes, uniform values can be floating-point, integer, or boolean in nature, but unlike attributes, you can have uniform variables in both the vertex and the fragment shader. Uniforms can be scalar or vector types, and you can have matrix uniforms. Technically, you can also have matrix attributes, where each column of the matrix takes up one of those four component vector slots, but this is not often done. There are even some special uniform setting functions we discuss in Chapter 5, "Basic Texturing," that deal with this.

Texture

A third type of data that you can pass to a shader is texture data. It is a bit early to try and go into much detail about how textures are handled and passed to a shader, but you know from Chapter 1, "Introduction to 3D Graphics and OpenGL," basically what a texture is. Texture values can be sampled and filtered from both the vertex and the fragment shader. Fragment shaders typically sample a texture to apply image data across the surface of a triangle. Texture data, however, is more useful than just to represent images. Most image file formats store color components in unsigned byte values (8 bits per color channel), but you can also have floating-point textures. This means potentially any large block of floating-point data, such as a large lookup table of an expensive function, could be passed to a shader in this way.

The fourth type of data shown in the diagram in Figure 3.1 are outs. An out variable is declared as an output from one shader stage and declared as an in in the subsequent shader stage. Outs can be passed simply from one stage to the next, or they may be interpolated in various ways. Client code has no access to these internal variables, but rather they are declared in both the vertex and the fragment shader (and possibly the optional geometry shader). The vertex shader assigns a value to the out variable, and the value is constant, or can be interpolated between vertexes as the primitive is rasterized. The fragment shader's corresponding in variable of the same name receives this constant or interpolated value. In Chapter 6, we see how this works in more detail.


Incorporating interactive three-dimensional graphics in astronomy research papers

Most research data collections created or used by astronomers are intrinsically multi-dimensional. In contrast, all visual representations of data presented within research papers are exclusively two-dimensional (2D). We present a resolution of this dichotomy that uses a novel technique for embedding three-dimensional (3D) visualisations of astronomy data sets in electronic-format research papers. Our technique uses the latest Adobe Portable Document Format extensions together with a new version of the S2PLOT programming library. The 3D models can be easily rotated and explored by the reader and, in some cases, modified. We demonstrate example applications of this technique including: 3D figures exhibiting subtle structure in redshift catalogues, colour-magnitude diagrams and halo merger trees; 3D isosurface and volume renderings of cosmological simulations; and 3D models of instructional diagrams and instrument designs.


Transformations

The last step is to take the input positions, specified in a way that’s convenient for modeling, and output positions OpenGL can actually rasterize correctly. That means generating final positions that are in clip space, which is basically a 3D cube ranging from -1 to 1 in each direction. Any point inside this cube will be rasterized.

However, there are actually a few related concepts at play:

Vertex shader input data. The data here can be in any form, and doesn’t even have to be coordinates. However, in practice, the input data is typically composed of 3D points in some coordinates that are easy for modeling, in which case the points are considered to be in model space.

Clip space. This is the output of the vertex shader, in homogeneous coordinates.

Normalized Device Coordinates. The same points in clip space, but converted from homogeneous coordinates to 3D Cartesian coordinates by doing a w-divide. Only points within a 2×2×2 cube centered around the origin (between -1 and 1 on all axes) will be rendered.

Screen space. These are 2D points where the origin, which used to be in the center of the space, is the bottom-left of the canvas. The z-coordinate is only used for depth sorting. These are the coordinates that end up being used for rasterization.

The four spaces above. The highlighted part is under our control.

The reason for my breakdown is many tutorials talk about model, world, and view space alongside clip space, NDC and screen space. But it’s important to realize that everything before clip space is up to you. You decide what the input data looks like and how it gets converted into clip space. That probably means specifying the inputs in model space, then sequentially applying the local and camera transformations. But if you want to understand the OpenGL APIs, it’s important to first understand what’s under your control and what OpenGL provides for you.

Here’s the cool part: if you wanted, you could pre-compute the coordinates directly in clip space and your vertex shader would just pass along the input positions to OpenGL. This is exactly what our first demo did! But, doing so per-frame is expensive, since the transformations we usually want to do (matrix multiplications) are much faster on the GPU.

Updating the shader to take in transformations

To convert from the model space coordinates specified in the VBO to clip space, let’s break the transformation up into two parts:

A model to world space transformation.

A perspective projection transformation.

This breakdown works for our purposes because we’ll rotate the cube over time while leaving the perspective projection the same. In real-world applications, you’d slot in a camera (or view) transformation in between the two steps to allow for a moving camera without having to modify the entire world.

Start by adding two uniform variables to the shader. Remember that a uniform is a piece of input data that is specified for an entire draw call and stays constant over all the vertices in that draw call. In our case, at any given moment, the transformations are the same for all the vertices of the triangles that make up our cube.

The two variables represent the two transformations, and therefore are 4×4 matrices. We can use the transformations by converting the input position into homogeneous coordinates, then pre-multiplying by the transformation matrices.

Specifying a perspective projection matrix

Let’s populate one of the two matrices above, namely the projection matrix. Remember that, because of the way our vertex shader is set up, the projection matrix will take points in world space (where the input points end up after the model transformation) and put them into clip space. That means the projection matrix has two responsibilities:

Make sure anything we want rendered gets put into the 2×2×2 cube centered around the origin. That means every vertex that we want rendered should have coordinates in the range of -1 to 1.

If we want perspective, set up the w-coordinate so that dividing by it performs a perspective divide.

I won’t go into the math of the perspective projection matrix, as it’s covered in other places. What’s important is we’ll define the matrix in terms of some parameters, like the distance to the near and far clipping planes, and the viewing angle. Let’s look at some code:
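A sketch of such a matrix, written here as a plain C function producing a flat 16-element, column-major array (the original post may use a different language; fovYRadians, aspect, zNear and zFar are assumed parameters):

    #include <math.h>

    /* Column-major perspective projection matrix: the first four values are the first column. */
    void perspective(float out[16], float fovYRadians, float aspect, float zNear, float zFar)
    {
        float f = 1.0f / tanf(fovYRadians / 2.0f);
        float m[16] = {
            f / aspect, 0.0f,  0.0f,                                     0.0f,  /* column 0 */
            0.0f,       f,     0.0f,                                     0.0f,  /* column 1 */
            0.0f,       0.0f,  (zFar + zNear) / (zNear - zFar),         -1.0f,  /* column 2 */
            0.0f,       0.0f,  (2.0f * zFar * zNear) / (zNear - zFar),   0.0f   /* column 3 */
        };
        for (int i = 0; i < 16; i++) out[i] = m[i];
    }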

To make sense of this matrix, notice the following:

Most importantly, matrices in OpenGL are defined in column-major order. That means the first four values above are actually the first column. Similarly, every fourth value contributes to a single row, meaning the bottom-most row is actually (0, 0, -1, 0) above.

The first three rows use the bounds defining the perspective transform to bring in the x, y and z-coordinates into the correct range.

The last row multiplies the z-coordinate by -1 and places it into the output w-coordinate. This is what causes the final “divide by w” to scale down objects that have a more negative z-coordinate (remembering that negative z values point away from us). If we were doing an orthographic projection, we would leave the output w-coordinate as 1.

Now that we have a matrix in the correct format, we can associate it with the variable in the shader. This works somewhat similarly to how an array is associated with an attribute variable. However, because a uniform contains essentially one piece of data to be shared by all vertices (as opposed to one piece of data per vertex), there’s no intermediate buffer. Just grab a handle to the variable and send it the data.
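A minimal sketch of that step, in C-style OpenGL (uProjection is an assumed uniform name; program and projectionMatrix are assumed to exist already):

    // Look up the uniform by the name it has in the shader source
    GLint projLoc = glGetUniformLocation(program, "uProjection");

    // Upload the 16 floats; transpose is GL_FALSE because the data is already column-major
    glUniformMatrix4fv(projLoc, 1, GL_FALSE, projectionMatrix);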

Make sure to do this outside the loop function. We won’t change the projection throughout the program execution, so we only need to send this matrix once!

Specifying the model transformation matrix

A traditional projection matrix assumes points in “view space”, that is in the view of a camera. To keep things simpler, we’ll assume the camera is at the origin and looking at the negative z direction (which is what view space essentially is). So now, the goal is to transform the shader input positions into points that are within the viewable area defined by the perspective projection’s field of view and clipping planes.

I chose the following transformations, in the following order:

  1. Scale by a factor of 2.
  2. Rotate around the Y axis by some angle.
  3. Rotate around the X axis by that same angle.
  4. Translate along the Z axis -9 units.

The choice was pretty arbitrary, but this resulted in a cube that had most sides visible (the very back face never came into view), filled up most of the canvas, and was relatively easy to calculate by hand.

I took the usual transformation matrices for these transformations (you can find them online), multiplied them in reverse order, and finally transposed them to get them into column-major order. I defined the matrices with some variables so I could easily vary the angle of rotation:

Notice that the model transformation matrix is defined inside the loop function, allowing the use of the time parameter to define the angle of rotation. Otherwise, associating the resulting matrix with the shader variable is exactly the same as before.

And with all that, we have our rotating cube! (I put it behind a play button to prevent the animation from using up your battery while you read the article.)

The entire code is embedded within this post. You’ll find a large portion of the code is defining the vertex shader input data.


Moving Bitmaps In 3D Space

Welcome to Tutorial 9. By now you should have a very good understanding of OpenGL. You've learned everything from setting up an OpenGL Window, to texture mapping a spinning object while using lighting and blending. This will be the first semi-advanced tutorial. You'll learn the following: Moving bitmaps around the screen in 3D, removing the black pixels around the bitmap (using blending), adding color to a black & white texture and finally you'll learn how to create fancy colors and simple animation by mixing different colored textures together.

We'll be modifying the code from lesson one for this tutorial. We'll start off by adding a few new variables to the beginning of the program. I'll rewrite the entire section of code so it's easier to see where the changes are being made.

The following lines are new. twinkle and tp are BOOLean variables, meaning they can be TRUE or FALSE. twinkle will keep track of whether or not the twinkle effect has been enabled. tp is used to check if the 'T' key has been pressed or released (pressed tp=TRUE, released tp=FALSE).
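These declarations look roughly like this:

    BOOL twinkle;   // Twinkling stars on or off
    BOOL tp;        // Is the 'T' key currently being held down?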

num will keep track of how many stars we draw to the screen. It's defined as a CONSTant. This means it can never change within the code. The reason we define it as a constant is because you can not redefine an array. So if we've set up an array of only 50 stars and we decided to increase num to 51 somewhere in the code, the array can not grow to 51, so an error would occur. You can change this value to whatever you want it to be in this line only. Don't try to change the value of num later on in the code unless you want disaster to occur.
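Something like:

    const int num = 50;   // Number of stars to draw; also the size of the star array below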

Now we create a structure. The word structure sounds intimidating, but it's not really. A structure is a group of simple data (variables, etc.) representing a larger, similar group. In English :) We know that we're keeping track of stars. You'll see that the 7th line below is stars. We know each star will have 3 values for color, and all these values will be integer values. The 3rd line int r,g,b sets up 3 integer values. One for red (r), one for green (g), and one for blue (b). We know each star will be a different distance from the center of the screen, and can be placed at one of 360 different angles from the center. If you look at the 4th line below, we make a floating point value called dist. This will keep track of the distance. The 5th line creates a floating point value called angle. This will keep track of the star's angle.

So now we have this group of data that describes the color, distance and angle of a star on the screen. Unfortunately we have more than one star to keep track of. Instead of creating 50 red values, 50 green values, 50 blue values, 50 distance values and 50 angle values, we just create an array called star. Each element in the star array will hold all of the information in our structure called stars. We make the star array in the 8th line below. If we break down the 8th line: stars star[num]. The type of the array is going to be stars. stars is a structure, so the array is going to hold all of the information in the structure. The name of the array is star. The number of elements in the array is [num]. So because num=50, we now have an array called star that stores the elements of the structure stars. A lot easier than keeping track of each star with separate variables, which would be a very stupid thing to do, and would not allow us to add or remove stars by changing the const value of num.
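Putting the structure and the array together, the declarations look roughly like this (the line numbers referenced above count from the typedef):

    typedef struct          // Structure describing one star
    {
        int r, g, b;        // Star color: red, green, blue
        GLfloat dist;       // Distance from the center of the screen
        GLfloat angle;      // Current angle around the center
    }
    stars;                  // The structure's name is stars
    stars star[num];        // An array of 'num' stars built from the structure 'stars'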

Next we set up variables to keep track of how far away from the stars the viewer is (zoom), and what angle we're seeing the stars from (tilt). We make a variable called spin that will spin the twinkling stars on the z axis, which makes them look like they are spinning at their current location.

loop is a variable we'll use in the program to draw all 50 stars, and texture[1] will be used to store the one b&w texture that we load in. If you wanted more textures, you'd increase the value from one to however many textures you decide to use.
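These declarations are roughly as follows (the starting values for zoom and tilt are illustrative; adjust to taste):

    GLfloat zoom = -15.0f;   // Distance away from the stars
    GLfloat tilt = 90.0f;    // Tilt of the view
    GLfloat spin;            // Spin for the twinkling stars

    GLuint  loop;            // Loop variable used to draw all 'num' stars
    GLuint  texture[1];      // Storage for the one black & white star texture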

Right after the line above we add code to load in our texture. I shouldn't have to explain the code in great detail. It's the same code we used to load the textures in lesson 6, 7 and 8. The bitmap we load this time is called star.bmp. We generate only one texture using glGenTextures(1, &texture[0]). The texture will use linear filtering.

This is the section of code that loads the bitmap (calling the code above) and converts it into a texture. Status is used to keep track of whether or not the texture was loaded and created.

Now we set up OpenGL to render the way we want. We're not going to be using Depth Testing in this project, so make sure if you're using the code from lesson one that you remove glDepthFunc(GL_LEQUAL) and glEnable(GL_DEPTH_TEST) otherwise you'll see some very bad results. We're using texture mapping in this code however so you'll want to make sure you add any lines that are not in lesson 1. You'll notice we're enabling texture mapping, along with blending.
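The relevant setup calls look roughly like this (note the deliberate absence of depth testing):

    glEnable(GL_TEXTURE_2D);              // Enable texture mapping
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);    // Additive blending: black pixels contribute nothing
    glEnable(GL_BLEND);                   // Enable blending
    // No glEnable(GL_DEPTH_TEST) here -- depth testing stays off for this lesson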

The following code is new. It sets up the starting angle, distance, and color of each star. Notice how easy it is to change the information in the structure. The loop will go through all 50 stars. To change the angle of star[1] all we have to do is say star[1].angle= . It's that simple!
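A reconstruction consistent with the description that follows:

    for (loop = 0; loop < num; loop++)                 // Loop through all the stars
    {
        star[loop].angle = 0.0f;                       // Every star starts at angle 0
        star[loop].dist  = ((float)loop / num) * 5.0f; // Each star a bit farther out (0.0 .. ~5.0)
        star[loop].r = rand() % 256;                   // Random red intensity   (0..255)
        star[loop].g = rand() % 256;                   // Random green intensity (0..255)
        star[loop].b = rand() % 256;                   // Random blue intensity  (0..255)
    }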

I calculate the distance by taking the current star (which is the value of loop) and dividing it by the maximum number of stars. Then I multiply the result by 5.0f. Basically what this does is move each star a little bit farther out than the previous star. For the last star, loop divided by num is almost 1.0f, so its distance is almost 5.0f, and 5.0f is the very edge of the screen. I don't want stars going off the screen, so 5.0f is perfect. If you set the zoom further into the screen you could use a higher number than 5.0f, but your stars would be a lot smaller (because of perspective).

You'll notice that the colors for each star are made up of random values from 0 to 255. You might be wondering how we can use such large values when normally the colors are from 0.0f to 1.0f. When we set the color we'll use glColor4ub instead of glColor4f. ub means Unsigned Byte. A byte can be any value from 0 to 255. In this program it's easier to use bytes than to come up with a random floating point value.

The Resize code is the same, so we'll jump to the drawing code. If you're using the code from lesson one, delete the DrawGLScene code, and just copy what I have below. There's only 2 lines of code in lesson one anyways, so there's not a lot to delete.

Now we move the star. The star starts off in the middle of the screen. The first thing we do is spin the scene on the y axis. If we spin 90 degrees, the x axis will no longer run left to right, it will run into and out of the screen. As an example to help clarify: imagine you were in the center of a room. Now imagine that the left wall had -x written on it, the front wall had -z written on it, the right wall had +x written on it, and the wall behind you had +z written on it. If the room spun 90 degrees to the right, but you did not move, the wall in front of you would no longer say -z; it would say -x. All of the walls would have moved. -z would be on the right, +z would be on the left, -x would be in front, and +x would be behind you. Make sense? By rotating the scene, we change the direction of the x and z planes.

The second line of code moves to a positive value on the x plane. Normally a positive value on x would move us to the right side of the screen (where +x usually is), but because we've rotated on the y plane, the +x could be anywhere. If we rotated by 180 degrees, it would be on the left side of the screen instead of the right. So when we move forward on the positive x plane, we could be moving left, right, forward or backward.

Now for some tricky code. The star is actually a flat texture. Now if you drew a flat quad in the middle of the screen and texture mapped it, it would look fine. It would be facing you like it should. But if you rotated on the y axis by 90 degrees, the texture would be facing the right and left sides of the screen. All you'd see is a thin line. We don't want that to happen. We want the stars to face the screen all the time, no matter how much we rotate and tilt the screen.

We do this by cancelling any rotations that we've made, just before we draw the star. You cancel the rotations in reverse order. So above we tilted the screen, then we rotated to the stars current angle. In reverse order, we'd un-rotate (new word) the stars current angle. To do this we use the negative value of the angle, and rotate by that. So if we rotated the star by 10 degrees, rotating it back -10 degrees will make the star face the screen once again on that axis. So the first line below cancels the rotation on the y axis. Then we need to cancel the screen tilt on the x axis. To do that we just tilt the screen by -tilt. After we've cancelled the x and y rotations, the star will face the screen completely.

If twinkle is TRUE, we'll draw a non-spinning star on the screen. To get a different color, we take the maximum number of stars (num) and subtract the current stars number (loop), then subtract 1 because our loop only goes from 0 to num-1. If the result was 10 we'd use the color from star number 10. That way the color of the two stars is usually different. Not a good way to do it, but effective. The last value is the alpha value. The lower the value, the darker the star is.

If twinkle is enabled, each star will be drawn twice. This will slow down the program a little depending on what type of computer you have. If twinkle is enabled, the colors from the two stars will mix together creating some really nice colors. Also because this star does not spin, it will appear as if the stars are animated when twinkling is enabled. (look for yourself if you don't understand what I mean).

Notice how easy it is to add color to the texture. Even though the texture is black and white, it will become whatever color we select before we draw the texture. Also take note that we're using bytes for the color values rather than floating point numbers. Even the alpha value is a byte.

Now we draw the main star. The only difference from the code above is that this star is always drawn, and this star spins on the z axis.

Here's where we do all the movement. We spin the normal stars by increasing the value of spin. Then we change the angle of each star. The angle of each star is increased by loop/num. What this does is spins the stars that are farther from the center faster. The stars closer to the center spin slower. Finally we decrease the distance each star is from the center of the screen. This makes the stars look as if they are being sucked into the middle of the screen.

The lines below check to see if the stars have hit the center of the screen or not. When a star hits the center of the screen it's given a new color, and is moved 5 units from the center, so it can start it's journey back to the center as a new star.

Now we're going to add code to check if any keys are being pressed. Go down to WinMain(). Look for the line SwapBuffers(hDC). We'll add our key checking code right under that line.

The lines below check to see if the T key has been pressed. If it has been pressed and it's not being held down the following will happen. If twinkle is FALSE, it will become TRUE. If it was TRUE, it will become FALSE. Once T is pressed tp will become TRUE. This prevents the code from running over and over again if you hold down the T key.

The code below checks to see if you've let go of the T key. If you have, it makes tp=FALSE. Pressing the T key will do nothing unless tp is FALSE, so this section of code is very important.

The rest of the code checks to see if the up arrow, down arrow, page up or page down keys are being pressed.

Like all the previous tutorials, make sure the title at the top of the window is correct.

In this tutorial I have tried to explain in as much detail as possible how to load in a gray scale bitmap image, remove the black space around the image (using blending), add color to the image, and move the image around the screen in 3D. I've also shown you how to create beautiful colors and animation by overlapping a second copy of the bitmap on top of the original bitmap. Once you have a good understanding of everything I've taught you up till now, you should have no problems making 3D demos of your own. All the basics have been covered!

Jeff Molofee (NeHe)

* DOWNLOAD Visual C++ Code For This Lesson.

* DOWNLOAD Borland C++ Builder 6 Code For This Lesson. ( Conversion by Christian Kindahl )
* DOWNLOAD C# Code For This Lesson. ( Conversion by Brian Holley )
* DOWNLOAD Code Warrior 5.3 Code For This Lesson. ( Conversion by Scott Lupton )
* DOWNLOAD Cygwin Code For This Lesson. ( Conversion by Stephan Ferraro )
* DOWNLOAD D Language Code For This Lesson. ( Conversion by Familia Pineda Garcia )
* DOWNLOAD Delphi Code For This Lesson. ( Conversion by Michal Tucek )
* DOWNLOAD Dev C++ Code For This Lesson. ( Conversion by Dan )
* DOWNLOAD Euphoria Code For This Lesson. ( Conversion by Evan Marshall )
* DOWNLOAD Game GLUT Code For This Lesson. ( Conversion by Milikas Anastasios )
* DOWNLOAD Irix Code For This Lesson. ( Conversion by Lakmal Gunasekara )
* DOWNLOAD Java Code For This Lesson. ( Conversion by Jeff Kirby )
* DOWNLOAD Jedi-SDL Code For This Lesson. ( Conversion by Dominique Louis )
* DOWNLOAD JoGL Code For This Lesson. ( Conversion by Abdul Bezrati )
* DOWNLOAD LCC Win32 Code For This Lesson. ( Conversion by Robert Wishlaw )
* DOWNLOAD Linux Code For This Lesson. ( Conversion by Richard Campbell )
* DOWNLOAD Linux/GLX Code For This Lesson. ( Conversion by Mihael Vrbanec )
* DOWNLOAD Linux/SDL Code For This Lesson. ( Conversion by Ti Leggett )
* DOWNLOAD LWJGL Code For This Lesson. ( Conversion by Mark Bernard )
* DOWNLOAD Mac OS Code For This Lesson. ( Conversion by Anthony Parker )
* DOWNLOAD Mac OS X/Cocoa Code For This Lesson. ( Conversion by Bryan Blackburn )
* DOWNLOAD MASM Code For This Lesson. (Conversion by Nico (Scalp) )
* DOWNLOAD Visual C++ / OpenIL Code For This Lesson. ( Conversion by Denton Woods )
* DOWNLOAD Power Basic Code For This Lesson. ( Conversion by Angus Law )
* DOWNLOAD Pelles C Code For This Lesson. ( Conversion by Pelle Orinius )
* DOWNLOAD Python Code For This Lesson. ( Conversion by Ryan Showalter )
* DOWNLOAD Solaris Code For This Lesson. ( Conversion by Lakmal Gunasekara )
* DOWNLOAD Visual Basic Code For This Lesson. ( Conversion by Peter De Tagyos )
* DOWNLOAD Visual Fortran Code For This Lesson. ( Conversion by Jean-Philippe Perois )
* DOWNLOAD Visual Studio .NET Code For This Lesson. ( Conversion by Grant James )