Your First three.js Scene: Hello, Cube!
In this chapter, we’ll create the Hello World of three.js apps: a simple white cube. Since we’ve already set up a simple webpage, as described in the last chapter, all we need to do is write a couple of lines of JavaScript in src/main.js and our app will spring to life. We’ll introduce quite a bit of theory along the way, but the actual code is short. Below is what this file will look like by the end of the chapter. Not counting the import statement and comments, there are under twenty lines of code in total. That’s all it takes to create a simple “Hello Cube!” three.js app.
import {
BoxBufferGeometry,
Color,
Mesh,
MeshBasicMaterial,
PerspectiveCamera,
Scene,
WebGLRenderer,
} from 'three';
// Get a reference to the container element that will hold our scene
const container = document.querySelector('#scene-container');
// create a Scene
const scene = new Scene();
// Set the background color
scene.background = new Color('skyblue');
// Create a camera
const fov = 35; // AKA Field of View
const aspect = container.clientWidth / container.clientHeight;
const near = 0.1; // the near clipping plane
const far = 100; // the far clipping plane
const camera = new PerspectiveCamera(fov, aspect, near, far);
// every object is initially created at ( 0, 0, 0 )
// move the camera back so we can view the scene
camera.position.set(0, 0, 10);
// create a geometry
const geometry = new BoxBufferGeometry(2, 2, 2);
// create a default (white) Basic material
const material = new MeshBasicMaterial();
// create a Mesh containing the geometry and material
const cube = new Mesh(geometry, material);
// add the mesh to the scene
scene.add(cube);
// create the renderer
const renderer = new WebGLRenderer();
// next, set the renderer to the same size as our container element
renderer.setSize(container.clientWidth, container.clientHeight);
// finally, set the pixel ratio so that our scene will look good on HiDPI displays
renderer.setPixelRatio(window.devicePixelRatio);
// add the automatically created <canvas> element to the page
container.append(renderer.domElement);
// render, or 'create a still image', of the scene
renderer.render(scene, camera);
Click the toggle on the top left of the editor to see this code in action, or, if you prefer to work locally, you can click the button to download a zip archive containing all the files from the editor. If any of the JavaScript here is unfamiliar to you, refer to the JavaScript reference and the DOM API reference in the appendices.
The Components of a Real-Time 3D App
Before we get started on the code, let’s look at the basic components that make up every three.js app. First, there’s the scene, camera, and renderer, which form the basic scaffolding of the application. Next, there’s the HTML <canvas> element, where we see the results. Last but not least, there’s a visible object such as a mesh. With the exception of the canvas (which is specific to the browser), an equivalent to each of these components can be found in any 3D graphics system, making the knowledge you’ll gain in these pages highly transferable.
The Scene: a Tiny Universe
The scene is a holder for everything we can see. You can think of it as a “tiny universe” in which all your 3D objects live. The three.js class we use to create a scene is simply called Scene. The constructor takes no parameters.
The scene defines a coordinate system called World Space, which is our main frame of reference when working with visible objects in three.js. World space is a 3D Cartesian coordinate system. We’ll explore what that means and how to use world space in more detail in the section on Coordinate Systems.
The very center of the scene is the point $(0,0,0)$, also called the origin of the coordinate system. Whenever we create a new object and add it to our scene, it will be placed at the origin, and whenever we move it around, we do so within this coordinate system.
When we add objects to the scene, they are placed into the scene graph, which is a tree structure with the scene at the top. This is similar to the way elements on an HTML page are structured, except that the HTML page is 2D while the scene graph is 3D.
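To give a rough idea of what this tree structure looks like in code, here’s a minimal sketch. The names parentMesh and childMesh are hypothetical, and it assumes a geometry and material like the ones we’ll create later in this chapter:
// a sketch of the scene graph: objects added to other objects form a tree
const parentMesh = new Mesh(geometry, material);
const childMesh = new Mesh(geometry, material);
parentMesh.add(childMesh); // childMesh is now a child of parentMesh
scene.add(parentMesh); // parentMesh (and childMesh along with it) is now a child of the scene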
The Camera: a Telescope pointed at the Tiny Universe
The tiny universe of the scene is a realm of pure mathematics. To view the scene, we need to open a window into this realm and convert it into something that makes sense to our human eyes, and that’s where the camera comes in. There are several ways to convert the scene into a format suited to human vision, using techniques called projections. The most important type of projection, for us, is perspective projection, which is designed to match the way our eyes see the world. To view the scene using perspective projection, we use the PerspectiveCamera. This type of camera is the 3D equivalent of a camera in the real world and uses many of the same concepts and terminology, such as the field of view and the aspect ratio. Unlike the Scene, the PerspectiveCamera constructor takes several parameters, which we’ll explain in detail below.
Another important type of projection is orthographic projection, which we can access using the OrthographicCamera. You might be familiar with this type of projection if you have ever studied engineering diagrams or blueprints, and it’s useful for creating 2D scenes or user interfaces that overlay a 3D scene. In this book, we’ll use HTML to create user interfaces and three.js to create 3D scenes, so we’ll stick with the PerspectiveCamera for the most part.
The following example shows the difference between these two cameras. The left side shows the scene rendered with an OrthographicCamera (press O) or a PerspectiveCamera (press P), while the right side of the view shows a zoomed-out overview of the camera:
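We won’t use orthographic projection in this chapter, but for comparison, here’s a rough sketch of how the two camera types are constructed. The OrthographicCamera would also need to be imported from 'three', and the frustum bounds here are arbitrary values chosen for illustration:
// perspective projection: fov, aspect, near, far
const perspectiveCamera = new PerspectiveCamera(35, 1, 0.1, 100);
// orthographic projection: left, right, top, bottom, near, far
const orthographicCamera = new OrthographicCamera(-10, 10, 10, -10, 0.1, 100);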
The Renderer: An Artist of Extraordinary Talent and Speed
If the scene is a tiny universe, and the camera is a telescope pointed at that universe, then the renderer is an artist who looks through the telescope and draws what they see onto a <canvas>, incredibly fast. We call this process rendering, and the resulting picture is a render. In this book, we will exclusively use the WebGLRenderer, which renders our scenes using WebGL2 if it’s available and falls back to WebGL 1 if it’s not. The constructor for the renderer does take several parameters; however, if we leave them out, default values will be used, which is fine for now.
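As a quick sketch of what passing a parameter looks like (we’ll stick with the defaults in this chapter), one commonly used option is antialias:
// a sketch only: enabling antialiasing via the constructor
const renderer = new WebGLRenderer({ antialias: true });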
Together, the scene, camera, and renderer give us the basic scaffolding of a three.js application. However, none of them can be seen. In this chapter, we’ll introduce a type of visible object called a mesh.
Our First Visible Object: Mesh
Meshes are the most common kind of visible object used in 3D computer graphics, and are used to display all kinds of 3D objects - cats and dogs and humans and trees and buildings and flowers and mountains can all be represented using a mesh. There are other kinds of visible objects, such as lines, and shapes, and sprites, and particles, and so on, and we’ll see all of them in later sections, but we’ll stick with meshes throughout these introductory chapters.
As you can see, the Mesh constructor takes two parameters: a geometry and a material. We will need to create both of these before we can create the mesh.
The Geometry
The geometry defines the shape of the mesh. We’ll use a kind of geometry called a BufferGeometry. In this case, we want a box shape, so we’ll use a BoxBufferGeometry, which is one of several basic shapes provided in the three.js core.
The constructor takes up to six parameters, but here, we provide only the first three, which specify the width, height, and depth of the box. Defaults are provided for any parameters we omit. You can play with all six parameters in the scene below.
The Material
While the geometry defines the shape, the material defines how the surface of the mesh looks. We’ll use the MeshBasicMaterial in this chapter, which is the simplest kind of material available and, more importantly, doesn’t require us to add any lights to the scene. For now, we will omit all parameters, which means a default white material will be created.
Many of the parameters are available for testing here. The Material menu has parameters that are common to all three.js materials, while the MeshBasicMaterial menu has parameters that belong to just this material.
Our First three.js App
Now we are ready to write some code! We’ve introduced all the components that will make up our simple app, so the next step is to figure out how they all fit together. We’ll break this process down into six steps. Every three.js app you create will require all six of these steps, although more complex apps will often require many more.
- Initial Setup
- Create the Scene
- Create the Camera
- Create a Visible Object
- Create the Renderer
- Render the Scene
1. Initial Setup
An important part of the initial setup is creating some kind of web page to host our scene, which we covered in the last chapter. Here, we’ll focus exclusively on the JavaScript we need to write. First, we’ll import the necessary classes from three.js, and then we’ll obtain a reference to the scene-container element from the index.html file.
Import Classes from three.js
Rounding up all the components we’ve introduced so far, we can see that we need these classes:
BoxBufferGeometry
Mesh
MeshBasicMaterial
PerspectiveCamera
Scene
WebGLRenderer
We’ll also use the Color class to set the scene’s background color.
We can import everything we need from the three.js core using a single import statement.
import {
BoxBufferGeometry,
Color,
Mesh,
MeshBasicMaterial,
PerspectiveCamera,
Scene,
WebGLRenderer,
} from 'three';
If you’re working locally (and not using a bundler like Webpack), you’ll have to change the import path. For example, you can import from skypack.dev instead.
import {
BoxBufferGeometry,
Color,
Mesh,
MeshBasicMaterial,
PerspectiveCamera,
Scene,
WebGLRenderer,
} from "https://cdn.skypack.dev/[email protected]";
Refer back to the intro if you need a reminder on how importing three.js classes works, or jump over to the appendices if you want a refresher on JavaScript modules.
Access the HTML scene-container Element in JavaScript
Over in index.html, we created a scene-container element.
<body>
<h1>Discoverthreejs.com - Your First Scene</h1>
<div id="scene-container">
<!-- Our <canvas> will be inserted here -->
</div>
</body>
The renderer will automatically create a <canvas> element for us, which we’ll insert inside this container. By doing this, we can control the size and position of our scene by using CSS to set the size of the container (as we described in the last chapter). First, though, we need to access the container element in JavaScript, which we’ll do using document.querySelector.
// Get a reference to the container element that will hold our scene
const container = document.querySelector('#scene-container');
2. Create the Scene
With the setup out of the way, we’ll start by creating the scene, our very own tiny universe. We’ll use the Scene constructor (with an uppercase “S”) to create a scene instance (with a lowercase “s”):
// create a Scene
const scene = new Scene();
Set the Scene’s Background Color
Next, we’ll change the color of the scene’s background to sky blue. If we don’t do this, the default color will be used, which is black. We’ll use the Color class that we imported above, passing the string 'skyblue' as a parameter to the constructor:
// Set the background color
scene.background = new Color('skyblue');
'skyblue' is a CSS color name, and we can use any of the CSS colors here, giving us 140 named colors to choose from. You’re not limited to these named colors, of course. You can use any color your monitor can display, and there are several ways of specifying them, just as there are in CSS.
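For example, here are a few equivalent ways of specifying the same sky blue, all of which the Color constructor accepts:
// several ways of specifying the same color
scene.background = new Color('skyblue'); // CSS color name
scene.background = new Color('#87ceeb'); // hex string
scene.background = new Color(0x87ceeb); // hexadecimal number
scene.background = new Color('rgb(135, 206, 235)'); // RGB string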
3. Create The Camera
There are a couple of different cameras available in the three.js core, but as we discussed above, we will mostly use the PerspectiveCamera since it draws a view of the scene that looks similar to how our eyes see the real world. The PerspectiveCamera constructor takes four parameters:
- fov, or field of view: how wide the camera’s view is, in degrees.
- aspect, or aspect ratio: the ratio of the scene’s width to its height.
- near, or near clipping plane: anything closer to the camera than this will be invisible.
- far, or far clipping plane: anything further away from the camera than this will be invisible.
// Create a camera
const fov = 35; // AKA Field of View
const aspect = container.clientWidth / container.clientHeight;
const near = 0.1; // the near clipping plane
const far = 100; // the far clipping plane
const camera = new PerspectiveCamera(fov, aspect, near, far);
Together, these four parameters are used to create a bounded region of space which we call a viewing frustum.
The Camera’s Viewing Frustum
If the scene is a tiny universe, stretching forever in all directions, the camera’s viewing frustum is the part of it that we can see. A frustum is a mathematical term meaning a four-sided rectangular pyramid with the top cut off. When we view the scene through a PerspectiveCamera, everything inside the frustum is visible, while everything outside it is not. In the following diagram, the area between the Near Clipping Plane and the Far Clipping Plane is the camera’s viewing frustum.
The four parameters we pass into the PerspectiveCamera constructor each create one aspect of the frustum:
- The field of view defines the angle at which the frustum expands. A narrow field of view will create a narrow frustum, and a wide field of view will create a wide frustum.
- The aspect ratio matches the frustum to the scene container element. When we set this to the container’s width divided by its height, we ensure the rectangular base of the frustum can be expanded to fit perfectly into the container. If we get this value wrong, the scene will look stretched and blurred.
- The near clipping plane defines the small end of the frustum (the end closest to the camera).
- The far clipping plane defines the large end of the frustum (the end furthest from the camera).
Any objects in your scene that are not inside the frustum won’t be drawn by the renderer. If an object is partly inside and partly outside the frustum, the parts outside will be chopped off (clipped).
Position the Camera
Every object we create is initially positioned at the center of our scene, the point $(0,0,0)$. This means our camera is currently positioned at $(0,0,0)$, and any objects we add to the scene will also be positioned at $(0,0,0)$, all jumbled together on top of each other. Placing the camera artistically is an important skill; however, for now, we’ll simply move it back (towards us) to give us an overview of the scene.
const camera = new PerspectiveCamera(fov, aspect, near, far);
// every object is initially created at ( 0, 0, 0 )
// move the camera back so we can view the scene
camera.position.set(0, 0, 10);
Setting the position of any object works the same way, whether it’s a camera, a mesh, a light, or anything else. We can set all three components of the position at once, as we’re doing here:
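// set the X, Y, and Z components of the position in a single call
camera.position.set(0, 0, 10);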
Or, we can set the X, Y, and Z components individually:
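// the same position, set one component at a time
camera.position.x = 0;
camera.position.y = 0;
camera.position.z = 10;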
Both ways of setting the position give the same result. The position is stored in a Vector3, a three.js class representing a 3D vector, which we’ll explore in more detail in the chapter on transformations.
4. Create a Visible Object
We’ve created a camera to see things with, and a scene to put them in. The next step is to create something we can see. Here, we’ll create a simple box-shaped Mesh. As we mentioned above, the mesh has two sub-components which we need to create first: a geometry and a material.
Create a Geometry
The geometry of a mesh defines its shape. If we create a box-shaped geometry (as we do here), our mesh will be shaped like a box. If we create a sphere-shaped geometry, our mesh will be shaped like a sphere. If we create a cat-shaped geometry, our mesh will be shaped like a cat… you get the picture. Here, we create a cube using a BoxBufferGeometry. The three parameters define the width, height, and depth of the box:
// create a geometry
const geometry = new BoxBufferGeometry(2, 2, 2);
Most parameters have default values, so even though the docs say that BoxBufferGeometry takes six parameters, we can leave out most of them and three.js will fill in the blanks with the default values. In fact, we don’t have to pass in any parameters at all.
If we leave out all the parameters, we’ll get a default box which is a $1 \times 1 \times 1$ cube. We want a bigger cube, so we’re passing in the above parameters to create a $2 \times 2 \times 2$ box.
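As a rough sketch of the full parameter list, the last three parameters are segment counts, which we can safely ignore for now (the variable names here are just for illustration):
// all six parameters: width, height, depth, widthSegments, heightSegments, depthSegments
const defaultBox = new BoxBufferGeometry(); // a 1 x 1 x 1 cube
const biggerBox = new BoxBufferGeometry(2, 2, 2); // the 2 x 2 x 2 box we’re using here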
Create a Material
Materials define the surface properties of objects, or in other words, what an object looks like it is made from. Where the geometry tells us that the mesh is a box, or a car, or a cat, the material tells us that it’s a metal box, or a stone car, or a red-painted cat.
There are quite a few materials in three.js. Here, we’ll create a MeshBasicMaterial, which is the simplest (and fastest) material type available. This material ignores any lights in the scene and colors (shades) a mesh based on the material’s color and other settings, which is great since we haven’t added any lights yet. We’ll create the material without passing any parameters into the constructor, so we’ll get a default white material.
// create a default (white) Basic material
const material = new MeshBasicMaterial();
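If we wanted something other than white, we could pass parameters to the constructor in an object. For example, here’s a sketch of creating a red material instead (we’ll stick with the default in this chapter):
// a sketch: a red Basic material instead of the default white
const redMaterial = new MeshBasicMaterial({ color: 'red' });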
Create the Mesh
Now that we have a geometry and a material, we can create our mesh, passing in both as parameters.
// create a geometry
const geometry = new BoxBufferGeometry(2, 2, 2);
// create a default (white) Basic material
const material = new MeshBasicMaterial();
// create a Mesh containing the geometry and material
const cube = new Mesh(geometry, material);
Later, we can access the geometry and material at any time using mesh.geometry and mesh.material.
Add the Mesh to the Scene
Once the mesh has been created, we need to add it to our scene.
// add the mesh to the scene
scene.add(cube);
Later, if we want to remove it, we can use scene.remove(mesh). Once the mesh has been added to the scene, we call the mesh a child of the scene, and we call the scene the parent of the mesh.
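For example, removing the cube again would look like this (we won’t actually do that here):
// remove the cube from the scene
scene.remove(cube);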
5. Create the Renderer
The final component of our simple app is the renderer, which is responsible for drawing (rendering) the scene into the <canvas> element. We’ll use the WebGLRenderer here. There are some other renderers available as plugins, but the WebGLRenderer is by far the most powerful renderer available, and usually the only one you need. Let’s go ahead and create a WebGLRenderer now, once again with default settings.
// create the renderer
const renderer = new WebGLRenderer();
Set the Renderer’s Size
We are nearly there! Next, we need to tell the renderer what size our scene is, using the container’s width and height.
// next, set the renderer to the same size as our container element
renderer.setSize(container.clientWidth, container.clientHeight);
If you recall, we used CSS to make the container take up the full size of the browser window (as described in the last chapter), so the scene will also take up the full window.
Set The Device Pixel Ratio
We also need to tell the renderer what the pixel ratio of the device’s screen is. This is required to prevent blurring on HiDPI displays (also known as retina displays).
// finally, set the pixel ratio so that our scene will look good on HiDPI displays
renderer.setPixelRatio(window.devicePixelRatio);
We won’t get into the technicalities here, but you mustn’t forget to set this; otherwise, your scene may look great on the laptop where you’re testing it, but blurry on mobile devices with retina displays. As always, the appendices have more details.
Add the <canvas> Element to Our Page
The renderer will draw our scene from the viewpoint of the camera into a <canvas> element. This element has been automatically created for us and is stored in renderer.domElement, but before we can see it, we need to add it to the page. We’ll do this using a built-in JavaScript method called .append:
// add the automatically created <canvas> element to the page
container.append(renderer.domElement);
Now, if you open up the browser’s development console (press F12) and inspect the HTML, you’ll see something like this:
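<div id="scene-container">
  <!-- the width and height attribute values here assume an 800 x 600 container and a pixel ratio of 1; yours will differ -->
  <canvas width="800" height="600" style="width: 800px; height: 600px;"></canvas>
</div>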
This assumes a browser window size of $800 \times 600$, so what you see may look slightly different. Notice that renderer.setSize has also set the width, height, and style attributes on the canvas.
6. Render the Scene
With everything in place, all that remains to do is render the scene! Add the following and final line to your code:
// render, or 'create a still image', of the scene
renderer.render(scene, camera);
With this single line, we’re telling the renderer to create a still picture of the scene using the camera and output that picture into the <canvas> element. If everything is set up correctly, you’ll see a white cube against a blue background. It’s hard to see that it’s a cube since we’re looking directly at a single square face, but we’ll fix that over the next few chapters.
Well done! By completing this chapter, you’ve taken the first giant leap in your career as a three.js developer. Our scene may not be that interesting yet, but we’ve laid some important groundwork and covered some fundamental concepts of computer graphics that you’ll use in every scene you build from now on, whether you are using three.js or any other 3D graphics system.