
Your First three.js Scene: Hello, Cube!

In this chapter, we’ll create the Hello World of three.js apps: a simple white cube. Since we’ve already set up a simple webpage, as described in the last chapter, all we need to do is write a couple of lines of JavaScript in src/main.js and our app will spring to life. We’ll introduce quite a bit of theory along the way, but the actual code is short. Below is what this file will look like by the end of the chapter. Not counting the import statement and comments, there are under twenty lines of code in total. That’s all it takes to create a simple “Hello Cube!” three.js app.

main.js: final result
import {
  BoxBufferGeometry,
  Color,
  Mesh,
  MeshBasicMaterial,
  PerspectiveCamera,
  Scene,
  WebGLRenderer,
} from 'three';

// Get a reference to the container element that will hold our scene
const container = document.querySelector('#scene-container');

// create a Scene
const scene = new Scene();

// Set the background color
scene.background = new Color('skyblue');

// Create a camera
const fov = 35; // AKA Field of View
const aspect = container.clientWidth / container.clientHeight;
const near = 0.1; // the near clipping plane
const far = 100; // the far clipping plane

const camera = new PerspectiveCamera(fov, aspect, near, far);

// every object is initially created at ( 0, 0, 0 )
// move the camera back so we can view the scene
camera.position.set(0, 0, 10);

// create a geometry
const geometry = new BoxBufferGeometry(2, 2, 2);

// create a default (white) Basic material
const material = new MeshBasicMaterial();

// create a Mesh containing the geometry and material
const cube = new Mesh(geometry, material);

// add the mesh to the scene
scene.add(cube);

// create the renderer
const renderer = new WebGLRenderer();

// next, set the renderer to the same size as our container element
renderer.setSize(container.clientWidth, container.clientHeight);

// finally, set the pixel ratio so that our scene will look good on HiDPI displays
renderer.setPixelRatio(window.devicePixelRatio);

// add the automatically created <canvas> element to the page
container.append(renderer.domElement);

// render, or 'create a still image', of the scene
renderer.render(scene, camera);


  

Click the toggle on the top left of the editor to see this code in action, or, if you prefer to work locally, you can click the button to download a zip archive containing all the files from the editor. If any of the JavaScript here is unfamiliar to you, refer to A.2: JavaScript Reference and A.3: The Document Object Model and DOM API in the appendices.

The Components of a Real-Time 3D App

Before we get started on the code, let’s look at the basic components that make up every three.js app. First, there’s the scene, camera, and renderer, which form the basic scaffolding of the application. Next, there’s the HTML <canvas> element, where we see the results. Last but not least, there’s a visible object such as a mesh. With the exception of the canvas (which is specific to the browser), an equivalent to each of these components can be found in any 3D graphics system, making the knowledge you’ll gain in these pages highly transferable.

The Scene: a Tiny Universe

The scene is a holder for everything we can see. You can think of it as a “tiny universe” in which all your 3D objects live. The three.js class we use to create a scene is simply called Scene. The constructor takes no parameters.

Creating a scene
    
import { Scene } from 'three';

const scene = new Scene();

  
The world space coordinate system, defined by the Scene

The scene defines a coordinate system called World Space, which is our main frame of reference when working with visible objects in three.js. World space is a 3D Cartesian coordinate system. We’ll explore what that means and how to use world space in more detail in 1.5: Transformations and Coordinate Systems.

The very center of the scene is the point $(0,0,0)$, also called the origin of the coordinate system. Whenever we create a new object and add it to our scene, it will be placed at the origin, and whenever we move it around, we do so within this coordinate system.

Objects added to the Scene live in the scene-graph,
a tree of visible objects

When we add objects to the scene, they are placed into the scene graph, which is a tree structure with the scene at the top.

 

Elements on an HTML page also form a tree structure

This is similar to the way elements on an HTML page are structured, except that the HTML page is 2D while the scene graph is 3D.

The Camera: a Telescope pointed at the Tiny Universe

The tiny universe of the scene is a realm of pure mathematics. To view the scene, we need to open a window into this realm and convert it into something that makes sense to our human eyes, and that’s where the camera comes in. There are several ways to convert the scene into an image suitable for human vision, using techniques called projections. The most important type of projection, for us, is perspective projection, which is designed to match the way our eyes see the world. To view the scene using perspective projection, we use the PerspectiveCamera. This type of camera is the 3D equivalent of a camera in the real world and uses many of the same concepts and terminology, such as field of view and aspect ratio. Unlike the Scene constructor, the PerspectiveCamera constructor takes several parameters, which we’ll explain in detail below.

Creating a PerspectiveCamera
    
import { PerspectiveCamera } from 'three';

const fov = 35; // AKA Field of View
const aspect = container.clientWidth / container.clientHeight;
const near = 0.1; // the near clipping plane
const far = 100; // the far clipping plane

const camera = new PerspectiveCamera(fov, aspect, near, far);

  

Another important type of projection is orthographic projection, which we can access using the OrthographicCamera. You might be familiar with this type of projection if you have ever studied engineering diagrams or blueprints, and it’s useful for creating 2D scenes or user interfaces that overlay a 3D scene. In this book, we’ll use HTML to create user interfaces and three.js to create 3D scenes, so we’ll stick with the PerspectiveCamera for the most part.

The following example shows the difference between these two cameras. The left side shows the scene rendered with an OrthographicCamera (press O) or a PerspectiveCamera (press P), while the right side shows a zoomed-out overview of the camera:

The OrthographicCamera and PerspectiveCamera in action

The Renderer: An Artist of Extraordinary Talent and Speed

If the scene is a tiny universe, and the camera is a telescope pointed at that universe, then the renderer is an artist who looks through the telescope and draws what they see onto a <canvas>, incredibly fast. We call this process rendering, and the resulting picture is a render. In this book, we will exclusively use the WebGLRenderer, which renders our scenes using WebGL 2 if it’s available and falls back to WebGL 1 if it’s not. The constructor for the renderer does take several parameters; however, if we leave them out, default values will be used, which is fine for now.

Creating a renderer with default parameters
    
import { WebGLRenderer } from 'three';

const renderer = new WebGLRenderer();

  

Together, the scene, camera, and renderer give us the basic scaffolding of a three.js application. However, none of them can be seen. In this chapter, we’ll introduce a type of visible object called a mesh.

Our First Visible Object: Mesh

A mesh consists of a geometry and a material

Meshes are the most common kind of visible object used in 3D computer graphics, and are used to display all kinds of 3D objects - cats and dogs and humans and trees and buildings and flowers and mountains can all be represented using a mesh. There are other kinds of visible objects, such as lines, and shapes, and sprites, and particles, and so on, and we’ll see all of them in later sections, but we’ll stick with meshes throughout these introductory chapters.

Creating a mesh
    
import { Mesh } from 'three';

const mesh = new Mesh(geometry, material);

  

As you can see, the Mesh constructor takes two parameters: a geometry and a material. We will need to create both of these before we can create the mesh.

The Geometry

The geometry defines the shape of the mesh. We’ll use a kind of geometry called a BufferGeometry. In this case, we want a box shape, so we’ll use a BoxBufferGeometry, which is one of several basic shapes provided in the three.js core.

Creating a 2x2x2 box shaped geometry
    
import { BoxBufferGeometry } from 'three';

const width = 2;
const height = 2;
const depth = 2;

const geometry = new BoxBufferGeometry(width, height, depth);

  

The constructor takes up to six parameters, but here, we provide only the first three, which specify the width, height, and depth of the box. Defaults are provided for any parameters we omit. You can play with all six parameters in the scene below.

The BoxBufferGeometry in action

The Material

While the geometry defines the shape, the material defines how the surface of the mesh looks. We’ll use the MeshBasicMaterial in this chapter, which is the simplest kind of material available, and more importantly, doesn’t require us to add any lights to the scene. For now, we will omit all parameters which means a default white material will be created.

Creating a basic material
    
import { MeshBasicMaterial } from 'three';

const material = new MeshBasicMaterial();

  

Many of the parameters are available for testing here. The Material menu has parameters that are common to all three.js materials, while the MeshBasicMaterial menu has parameters that belong to just this material.

The MeshBasicMaterial in action

Our First three.js App

Now we are ready to write some code! We’ve introduced all the components that will make up our simple app, so the next step is to figure out how they all fit together. We’ll break this process into six steps. Every three.js app you create will require all six of these steps, although more complex apps will often require many more.

  1. Initial Setup
  2. Create the Scene
  3. Create the Camera
  4. Create a Visible Object
  5. Create the Renderer
  6. Render the Scene

1. Initial Setup

An important part of the initial setup is creating some kind of web page to host our scene, which we covered in the last chapter. Here, we’ll focus exclusively on the JavaScript we need to write. First, we’ll import the necessary classes from three.js, and then we’ll obtain a reference to the scene-container element from the index.html file.

Import Classes from three.js

Rounding up all the components we’ve introduced so far, we can see that we need these classes:

  • BoxBufferGeometry
  • Mesh
  • MeshBasicMaterial
  • PerspectiveCamera
  • Scene
  • WebGLRenderer

We’ll also use the Color class to set the scene’s background color:

  • Color

We can import everything we need from the three.js core using a single import statement.

main.js: importing the required three.js classes, NPM style
import {
  BoxBufferGeometry,
  Color,
  Mesh,
  MeshBasicMaterial,
  PerspectiveCamera,
  Scene,
  WebGLRenderer,
} from 'three';

  

If you’re working locally (and not using a bundler like Webpack), you’ll have to change the import path. For example, you can import from unpkg.com instead.

main.js: importing the required three.js classes from a CDN
    

import {
  BoxBufferGeometry,
  Color,
  Mesh,
  MeshBasicMaterial,
  PerspectiveCamera,
  Scene,
  WebGLRenderer,
} from 'https://unpkg.com/[email protected]/build/three.module.js';



  

Refer back to 0.5: How to Include three.js in Your Projects if you need a reminder on how importing three.js classes works, or jump over to A.4: JavaScript Modules if you want a refresher on JavaScript modules.

Access the HTML scene-container Element in JavaScript

Over in index.html, we created a scene-container element.

index.html: the container element
<body>
  <h1>Discoverthreejs.com - Your First Scene</h1>

  <div id="scene-container">
    <!-- Our <canvas> will be inserted here -->
  </div>
</body>

  

The renderer will automatically create a <canvas> element for us, which we’ll insert inside this container. By doing this, we can control the size and position of our scene by using CSS to set the size of the container (as we described in the last chapter). First though, we need to access the container element in JavaScript, which we’ll do using document.querySelector.

main.js: get a reference to the scene container
// Get a reference to the container element that will hold our scene
const container = document.querySelector('#scene-container');

  

2. Create the Scene

With the setup out of the way, we’ll start by creating the scene, our very own tiny universe. We’ll use the Scene constructor (with an uppercase “S”) to create a scene instance (with a lowercase “s”):

main.js: create the scene
// create a Scene
const scene = new Scene();

  

Set the Scene’s Background Color

Next, we’ll change the color of the scene’s background to sky blue. If we don’t do this, the default color will be used, which is black. We’ll use the Color class that we imported above, passing the string 'skyblue' as a parameter to the constructor:

main.js: set the scene’s background color
// Set the background color
scene.background = new Color('skyblue');

  

'skyblue' is a CSS color name, and we can use any of the CSS colors here, giving us 140 named colors. You’re not limited to just these few colors, of course. You can use any color your monitor can display, and there are several ways of specifying them, just as there are in CSS.

3. Create The Camera

There are a couple of different cameras available in the three.js core, but as we discussed above, we will mostly use the PerspectiveCamera since it draws a view of the scene that looks similar to how our eyes see the real world. The PerspectiveCamera constructor takes four parameters:

  1. fov, or field of view: how wide the camera’s view is, in degrees.
  2. aspect, or aspect ratio: the ratio of the scene’s width to its height.
  3. near, or near clipping plane: anything closer to the camera than this will be invisible.
  4. far, or far clipping plane: anything further away from the camera than this will be invisible.
main.js: create the camera
// Create a camera
const fov = 35; // AKA Field of View
const aspect = container.clientWidth / container.clientHeight;
const near = 0.1; // the near clipping plane
const far = 100; // the far clipping plane

const camera = new PerspectiveCamera(fov, aspect, near, far);

  

Together, these four parameters are used to create a bounded region of space which we call a viewing frustum.

The Camera’s Viewing Frustum

A frustum

If the scene is a tiny universe, stretching forever in all directions, the camera’s viewing frustum is the part of it that we can see. A frustum is the mathematical name for a solid shape like a pyramid with its top cut off; here, it’s a four-sided rectangular pyramid. When we view the scene through a PerspectiveCamera, everything inside the frustum is visible, while everything outside it is not. In the following diagram, the area between the Near Clipping Plane and the Far Clipping Plane is the camera’s viewing frustum.

The four parameters we pass into the PerspectiveCamera constructor each create one aspect of the frustum:

  1. The field of view defines the angle at which the frustum expands. A small field of view will create a narrow frustum, and a wide field of view will create a wide frustum.
  2. The aspect ratio matches the frustum to the scene container element. When we set this to the container’s width divided by its height, we ensure the rectangular base of the frustum can be expanded to fit perfectly into the container. If we get this value wrong the scene will look stretched and blurred.
  3. The near clipping plane defines the small end of the frustum (the end closest to the camera).
  4. The far clipping plane defines the large end of the frustum (the end furthest from the camera).

Any objects in your scene that are not inside the frustum won’t be drawn by the renderer. If an object is partly inside and partly outside the frustum, the parts outside will be chopped off (clipped).

Position the Camera

Every object we create is initially positioned at the center of our scene, the point $(0,0,0)$. This means our camera is currently positioned at $(0,0,0)$, and any objects we add to the scene will also be positioned at $(0,0,0)$, all jumbled together on top of each other. Placing the camera artistically is an important skill, however, for now, we’ll simply move it back (towards us) to give us an overview of the scene.

main.js: move the camera back on the Z-axis
const camera = new PerspectiveCamera(fov, aspect, near, far);

// every object is initially created at ( 0, 0, 0 )
// move the camera back so we can view the scene
camera.position.set(0, 0, 10);

  

Setting the position of any object works the same way, whether it’s a camera, a mesh, a light, or anything else. We can set all three components of the position at once, as we’re doing here:

Set the X, Y, and Z axes together
    
camera.position.set(0, 0, 10);

  

Or, we can set the X, Y, and Z components individually:

Set the three axes individually
    
camera.position.x = 0;
camera.position.y = 0;
camera.position.z = 10;

  

Both ways of setting the position give the same result. The position is stored in a Vector3, a three.js class representing a 3D vector which we’ll explore in more detail in 1.5: Transformations and Coordinate Systems.

4. Create a Visible Object

We’ve created a camera to see things with, and a scene to put them in. The next step is to create something we can see. Here, we’ll create a simple box-shaped Mesh. As we mentioned above, the mesh has two sub-components which we need to create first: a geometry and a material.

Create a Geometry

The geometry of a mesh defines its shape. If we create a box-shaped geometry (as we do here), our mesh will be shaped like a box. If we create a sphere-shaped geometry, our mesh will be shaped like a sphere. If we create a cat-shaped geometry, our mesh will be shaped like a cat… you get the picture. Here, we create a cube using a BoxBufferGeometry. The three parameters define the width, height, and depth of the box:

main.js: create a box geometry
// create a geometry
const geometry = new BoxBufferGeometry(2, 2, 2);

  

Most parameters have default values, so even though the docs say that BoxBufferGeometry takes up to six parameters, we can leave most of them out and three.js will fill in the blanks with default values. In fact, we don’t have to pass in any parameters at all.

Creating a default box geometry
    
const geometry = new BoxBufferGeometry();

  

If we leave out all the parameters, we’ll get a default box which is a $1 \times 1 \times 1$ cube. We want a bigger cube, so we’re passing in the above parameters to create a $2 \times 2 \times 2$ box.

Create a Material

Materials define the surface properties of objects, or in other words, what an object looks like it is made from. Where the geometry tells us that the mesh is a box, or a car, or a cat, the material tells us that it’s a metal box, or a stone car, or a red-painted cat.

There are quite a few materials in three.js. Here, we’ll create a MeshBasicMaterial, which is the simplest (and fastest) material type available. This material ignores any lights in the scene and shades the mesh based on the material’s color and other settings, which is great since we haven’t added any lights yet. We’ll create the material without passing any parameters into the constructor, so we’ll get a default white material.

main.js: create a default material
// create a default (white) Basic material
const material = new MeshBasicMaterial();

  

Create the Mesh

Now that we have a geometry and a material, we can create our mesh, passing in both as parameters.

main.js: create the mesh
// create a geometry
const geometry = new BoxBufferGeometry(2, 2, 2);

// create a default (white) Basic material
const material = new MeshBasicMaterial();

// create a Mesh containing the geometry and material
const cube = new Mesh(geometry, material);

  

Later, we can access the geometry and material at any time using mesh.geometry and mesh.material.

Add the Mesh to the Scene

Once the mesh has been created, we need to add it to our scene.

main.js: add the mesh to the scene
// add the mesh to the scene
scene.add(cube);

  

Later, if we want to remove it, we can use scene.remove(mesh). Once the mesh has been added to the scene, we call the mesh a child of the scene, and we call the scene the parent of the mesh.

5. Create the Renderer

The final component of our simple app is the renderer, which is responsible for drawing (rendering) the scene into the <canvas> element. We’ll use the WebGLRenderer here. There are some other renderers available as plugins, but the WebGLRenderer is by far the most powerful renderer available, and usually the only one you need. Let’s go ahead and create a WebGLRenderer now, once again with default settings.

main.js: create the renderer
// create the renderer
const renderer = new WebGLRenderer();

  

Set the Renderer’s Size

We are nearly there! Next, we need to tell the renderer what size our scene is, using the container’s width and height.

main.js: set the renderer’s size
// next, set the renderer to the same size as our container element
renderer.setSize(container.clientWidth, container.clientHeight);

  

If you recall, we used CSS to make the container take up the full size of the browser window (as described in the last chapter), so the scene will also take up the full window.
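One caveat: renderer.setSize only takes effect when we call it, so if the container later changes size (for example, when the browser window is resized), the canvas will go stale. Handling this properly is beyond this chapter, but as a sketch, a helper might look like this (onResize is a name we made up, not a three.js API):

```javascript
// a hypothetical helper: keep the camera and renderer in sync
// with the container's current size
function onResize(container, camera, renderer) {
  // match the frustum's aspect ratio to the new shape of the container
  camera.aspect = container.clientWidth / container.clientHeight;

  // tell the camera to recalculate its frustum
  camera.updateProjectionMatrix();

  // resize the <canvas> to fill the container again
  renderer.setSize(container.clientWidth, container.clientHeight);
}

// run it whenever the browser window changes size:
// window.addEventListener('resize', () => onResize(container, camera, renderer));
```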

Set The Device Pixel Ratio

We also need to tell the renderer what the pixel ratio of the device’s screen is. This is required to prevent blurring on HiDPI displays (also known as retina displays).

main.js: set the pixel ratio
// finally, set the pixel ratio so that our scene will look good on HiDPI displays
renderer.setPixelRatio(window.devicePixelRatio);

  

We won’t get into the technicalities here, but you mustn’t forget to set this, otherwise your scene may look great on the laptop where you’re testing it, but blurry on mobile devices with retina displays. As always, the appendices have more details.
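One related practice worth knowing about (it’s a common optimization, not something this chapter’s code does): on some phones devicePixelRatio can be 3 or higher, which multiplies the number of pixels the renderer must draw. Capping the value is a simple safeguard:

```javascript
// cap the device pixel ratio to limit rendering cost on very high
// density screens (the cap of 2 is a common choice, not a three.js
// requirement)
function clampedPixelRatio(devicePixelRatio, maxRatio = 2) {
  return Math.min(devicePixelRatio, maxRatio);
}

// renderer.setPixelRatio(clampedPixelRatio(window.devicePixelRatio));
```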

Add the <canvas> Element to Our Page

The renderer will draw our scene from the viewpoint of the camera into a <canvas> element. This element has been automatically created for us and is stored in renderer.domElement, but before we can see it, we need to add it to the page. We’ll do this using a built-in JavaScript method called .append:

main.js: add the canvas to the page
// add the automatically created <canvas> element to the page
container.append(renderer.domElement);

  

Now, if you open up the browser’s development console (press F12) and inspect the HTML, you’ll see something like this:

index.html
    
<div id="scene-container">
  <canvas
    width="800"
    height="600"
    style="width: 800px; height: 600px;"
  ></canvas>
</div>

  

This assumes a browser window size of $800 \times 600$, so what you see may look slightly different. Notice that renderer.setSize has also set the width, height, and style attributes on the canvas.

6. Render the Scene

With everything in place, all that remains to do is render the scene! Add the following and final line to your code:

main.js: render the scene
// render, or 'create a still image', of the scene
renderer.render(scene, camera);

  

With this single line, we’re telling the renderer to create a still picture of the scene using the camera and output that picture into the <canvas> element. If everything is set up correctly, you’ll see a white cube against a blue background. It’s hard to see that it’s a cube since we’re looking directly at a single square face, but we’ll fix that over the next few chapters.

Well done! By completing this chapter, you’ve taken the first giant leap in your career as a three.js developer. Our scene may not be that interesting yet, but we’ve laid some important groundwork and covered some fundamental concepts of computer graphics that you’ll use in every scene you build from now on, whether you are using three.js or any other 3D graphics system.
