OpenCL/OpenGL Interop Framework

Download CLGLDemo and CLMandelbrot sources

1. Introduction

Interoperation between OpenCL and OpenGL allows programmers to efficiently perform complex manipulations of data directly in GPU memory. CMSoft brings developers the new GLRender tool in OpenCLTemplate, which automates the creation of an OpenGL scene coupled with a derived OpenCL context.

It is possible to create and display buffer objects in the OpenGL scene as well as manipulate them using OpenCL interoperation with very little effort using a pre-configured OpenCL environment. Two source codes are provided to demonstrate the framework capabilities: one that draws a Mandelbrot fractal set and another to show the capabilities of the 3D mouse in a CL/GL shared environment.

The OpenCL/GL interop tutorial demonstrates how to create the OpenCL context from an existing OpenGL window, as well as the commands to create OpenCL buffers from OpenGL. These tasks have been automated and this tutorial will focus on using OpenCLTemplate tools to easily create and manipulate CL/GL shared buffers.

CMSoft’s CLGL Interoperation with Textures Tutorial demonstrates how to share OpenGL Textures with OpenCL.

The following video demonstrates how to easily create a 3D interactive environment using Visual C# Express 2008 and OpenCLTemplate:

The following picture demonstrates the Mandelbrot fractal image obtained using OpenCLTemplate’s CLGL interoperation capabilities:

It may be helpful to take a look at the namespace OpenCLTemplate.CLGLInterop in the doxygen documentation. Look for GLRender and CLGLInteropFunctions.

Notice that the framework provided constitutes a general OpenGL environment useful even if there’s no intention to interoperate with OpenCL.

Note 1: If you are not accustomed to dealing with OpenGL (especially Vertex Buffer Objects), please scroll down to the Appendix: Understanding OpenGL VBOs.

Note 2: Sections 2, 3 and 4 deal with the creation of OpenCLTemplate’s 3D environment and models. If you’re creating the OpenGL window manually and are only interested in the interoperation part, please go to section 5, Manipulating VBOs.

2. Creating the 3D environment

As demonstrated in the video, the 3D OpenGL screen is created within a Form object. It is important to have a parent form because OpenCLTemplate uses some of the form’s events to handle its content. The screen is created using only one command:

//This command automatically calls CLCalc.InitCL() and initializes the interoperation
CLGLWindow = new GLRender(this, true, -1);

The parameters to the function are, in the order they appear:

Parent Form – Indicates which Form object will contain the OpenGL window.
Create CL/GL context – Set to true to create an associated OpenCL context and initialize OpenCL. You may create the OpenGL screen without OpenCL interoperation if you wish to.
Device number – In almost all cases, setting this to -1 (automatic selection) works. This parameter exists because, on some systems with multiple GPUs, it may be necessary to specify manually which Device renders the OpenGL scene (otherwise creation of the OpenCL context will fail).

Notice that one single line of code creates the 3D interactive window.

As a final note, if a professional stereo 3D system is available (such as a Quadro card with NVidia 3D Vision supporting quad-buffered OpenGL), setting the StereoscopicDraw property to true automatically renders the scene in stereoscopic 3D mode. The eye separation is then controlled by setting StereoDistance to the desired value.
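Assuming the property names quoted above, enabling stereo mode might look like this (the numeric value is purely illustrative):

```csharp
//Enable quad-buffered stereoscopic rendering (requires supporting hardware)
CLGLWindow.StereoscopicDraw = true;
//Eye separation; the appropriate value depends on the scene scale
CLGLWindow.StereoDistance = 0.1f;
```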

3. Creating 3D models

OpenCLTemplate’s GLRender framework renders 3D models automatically and also handles mouse control of the scene. Lighting and blending are automatically controlled by the framework.

3.1 Creating surfaces from equations

One possible way to create 3D models is to use OpenGL/CL interop capabilities and create the models using mathematical equations. This is done as follows:

//Creates a plane from equation
plane = GLRender.GLVBOModel.CreateSurface(
    new float[] { -100, 100, N },
    new float[] { -100, 100, N },
    new string[] { "0.0f", "v", "u" },
    new string[] { "0.0f", "0.01f*fabs(v)", "0.01f*fabs(u)" },
    new string[] { "1.0f", "0.0f", "0.0f" });

The CreateSurface method takes as input parametric equations in the parameters u and v. In the call above, N is the number of points in both the u and v directions. In our case, we’re drawing a flat surface in the YZ plane:

plane = [0, v, u], with u = -100 … 100 using N points and v = -100 … 100 using N points. The second set of equations specifies the vertex colors (remember that they have to be in the range 0 to 1), and the third argument specifies the normal vector per vertex (in this case it is constant, as we’re creating a plane).
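As another example, the saddle surface built in the video (Appendix B) could be sketched as below. The equation strings are OpenCL C expressions in u and v; registering the model through a Models list is an assumption based on the “models display List” comments later in this article:

```csharp
//Saddle surface z = u*v over [-10,10] x [-10,10] with 500 points per direction
GLRender.GLVBOModel saddle = GLRender.GLVBOModel.CreateSurface(
    new float[] { -10, 10, 500 },
    new float[] { -10, 10, 500 },
    new string[] { "u", "v", "u*v" },                          //vertex coordinates
    new string[] { "0.5f", "0.05f*fabs(u)", "0.05f*fabs(v)" }, //colors, range 0 to 1
    new string[] { "-v", "-u", "1.0f" });                      //normals (normalized internally)
CLGLWindow.Models.Add(saddle); //Models list assumed as the display registry
```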

3.2 Creating generic graphics

In order to create 3D models using OpenGL primitives, it is necessary to build the vertex data manually. If you are unfamiliar with primitive types, check a list of OpenGL primitives (for example, in the OpenGL documentation or the OpenTK reference).

As an example, let’s create a solid triangle using OpenCLTemplate’s GLRender.GLVBOModel structure:

//Manually creates a triangle
GLRender.GLVBOModel triangle = new GLRender.GLVBOModel(OpenTK.Graphics.OpenGL.BeginMode.Triangles);
//Vertex coordinates (reconstructed here as an example triangle in the YZ plane)
float[] vertexes = new float[] { 0.0f, -1.0f, -1.0f,   0.0f, 1.0f, -1.0f,   0.0f, 0.0f, 1.0f };
//One normal per vertex, all pointing in the +X direction
float[] normals = new float[] { 1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f };
//RGBA color per vertex
float[] colors = new float[]
{
    1.0f, 0.0f, 0.0f, 1.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f, 1.0f
};
//Data for GL.Triangles primitive draw
int[] elements = new int[] { 0, 1, 2 };
//Creates OpenGL buffers (method names assumed from OpenCLTemplate's GLVBOModel)
triangle.SetVertexData(vertexes);
triangle.SetNormalData(normals);
triangle.SetColorData(colors);
triangle.SetElemData(elements);
//Adds model to models display List so it will be displayed in the OpenGL screen
CLGLWindow.Models.Add(triangle);

This is the result:

If, instead, we wanted to create a wireframe triangle using the GL.Lines primitive, we’d have to change the elements vector and specify the different primitive:

//Manually creates a wireframe triangle
GLRender.GLVBOModel triangle = new GLRender.GLVBOModel(OpenTK.Graphics.OpenGL.BeginMode.Lines);
//Same vertex coordinates, normals and colors as before
float[] vertexes = new float[] { 0.0f, -1.0f, -1.0f,   0.0f, 1.0f, -1.0f,   0.0f, 0.0f, 1.0f };
float[] normals = new float[] { 1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f };
float[] colors = new float[]
{
    1.0f, 0.0f, 0.0f, 1.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f, 1.0f
};
//Data for GL.Lines primitive draw: each pair of indexes defines one line segment
int[] elements = new int[] { 0, 1,   1, 2,   2, 0 };
//Creates OpenGL buffers (method names assumed from OpenCLTemplate's GLVBOModel)
triangle.SetVertexData(vertexes);
triangle.SetNormalData(normals);
triangle.SetColorData(colors);
triangle.SetElemData(elements);
//Adds model to models display List so it will be displayed in the OpenGL screen
CLGLWindow.Models.Add(triangle);

And the result is as follows:

3.3 Solid rotations and translations

It is possible to perform solid rotations and translations (and also scaling) on the objects by specifying a rotation vector (in radians) and a translation vector that set their drawing position. The effect of vetRot and vetTransl follows directly from the OpenGL commands used:

/// <summary>Draws this model</summary>
public void DrawModel()
{
    if (!this.ShowModel) return;
    GL.Translate((float)vetTransl.x, (float)vetTransl.y, (float)vetTransl.z);
    GL.Rotate((float)vetRot.z * rad2deg, 0.0f, 0.0f, 1.0f);
    GL.Rotate((float)vetRot.y * rad2deg, 0.0f, 1.0f, 0.0f);
    GL.Rotate((float)vetRot.x * rad2deg, 1.0f, 0.0f, 0.0f);
    GL.Scale(Scale, Scale, Scale);
    // Draws Vertex Buffer Objects onto the screen
}

Warning: these transformations only affect how the model is drawn. In order to operate properly with the 3D mouse it’s necessary to rotate/translate/scale the vertex coordinates themselves in Device memory (i.e., modify the actual VBO).
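As a hedged illustration (the vetRot/vetTransl fields and the Scale property appear in the DrawModel code above; treating them as publicly settable is an assumption):

```csharp
//Sketch: rotate a model 45 degrees about Z, lift it and enlarge it
triangle.vetRot.z = Math.PI / 4.0;  //radians; converted to degrees inside DrawModel
triangle.vetTransl.y = 2.0;
triangle.Scale = 1.5f;              //uniform scale factor
```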

4. Controlling the environment

4.1 Mouse rotation and translation

OpenCLTemplate’s GLRender framework automatically implements mouse rotation and translation, which is the most common way to interact with 3D models in scientific applications. The mouse scroll wheel zooms, and the WASD keys can be used to navigate the model.

The OpenGL screen mouse mode can be set using the command:

CLGLWindow.MouseMode = GLRender.MouseMoveMode.<desired mode>;

The default setting uses the mouse for rotation, but it is also possible to disable interaction, use the mouse to pan, or use the 3D mouse mode.
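For example (the enumeration member name below is an assumption; check the MouseMoveMode enumeration in the doxygen documentation for the exact identifiers):

```csharp
//Switch from the default rotation mode to the 3D mouse mode
CLGLWindow.MouseMode = GLRender.MouseMoveMode.Mouse3D; //member name assumed
```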

4.2 The 3D mouse

The 3D mouse is a concept we at CMSoft created to manipulate 3D models obtained from medical visualization. The + and – keys respectively increase and decrease the radius of the 3D mouse sphere, which sets its interaction distance.

There are two default tools: vertex erasing and vertex move. They work as follows:

  • Click with the left button and drag: GLRender will deform and drag vertexes of visible models in contact with the mouse sphere;
  • Click with the right button: GLRender will hide the vertexes of visible models in contact with the 3D mouse.

Of course, different implementations may be created such as 3D mouse object picking and 3D mouse drawing, and we look forward to receiving suggestions about new mouse modes. It is possible to define if a GLVBOModel 3D object will be shown by setting the DisplayModel property.

4.3 Showing/hiding axes

In scientific applications it is often convenient to display the XYZ axes, and these are enabled by default in the GLRender environment. To disable axes display, modify the DrawAxes property as shown below:

//This command automatically calls CLCalc.InitCL() and initializes the interoperation
CLGLWindow = new GLRender(this, true, -1);
//Stop drawing XYZ axes
CLGLWindow.DrawAxes = false;

4.4 Camera settings

It is possible to set the point the camera looks at (the center) and the distance of the camera from that point. OpenGL users will immediately recall the gluLookAt() function. The difference here is that the camera eye position is not set directly: the user controls it with mouse rotation, and the environment handles all the necessary vector calculations.

//Sets camera to look at point [5,5,5]
CLGLWindow.SetCenter(new Vector(5, 5, 5));
//Sets camera distance from the point it's looking at to 10
CLGLWindow.SetDistance(10); //method name assumed; see the GLRender documentation

5. Manipulating VBOs in GPU memory using OpenCL

This section discusses the OpenCLTemplate’s automation regarding OpenCL/GL interoperation itself. The framework automatically performs all necessary initializations.

The easiest way to interoperate is to create a GLRender.GLVBOModel 3D model and use the built-in commands to retrieve the CLCalc.Program.Variable variables, but it is also possible to create the buffers from a generic OpenGL VBO.

In order to demonstrate the capabilities of the interoperation a program has been created to compute the Mandelbrot fractal. This is accomplished by manipulating the Colors VBO directly in the Device memory using CLGL interop and OpenCLTemplate.CLGLInterop.CLGLInteropFunctions class.
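The kernel source itself is not reproduced in this article. A hedged sketch of what such a Mandelbrot coloring kernel could look like follows; the argument layout (an RGBA color buffer plus a {xmin, xmax, ymin, ymax} bounds buffer) is an assumption based on the host code shown in section 5.2:

```c
//Sketch of a Mandelbrot coloring kernel (OpenCL C); argument layout is assumed
__kernel void Mandelbrot(__global float* colors, __global float* bounds)
{
    int i = get_global_id(0), j = get_global_id(1);
    int N = get_global_size(0);

    //Map the grid position to a point c = (x0, y0) in the complex plane
    float x0 = bounds[0] + (bounds[1] - bounds[0]) * i / (float)(N - 1);
    float y0 = bounds[2] + (bounds[3] - bounds[2]) * j / (float)(N - 1);

    //Classic escape-time iteration z -> z*z + c
    float x = 0.0f, y = 0.0f; int it = 0;
    while (x * x + y * y < 4.0f && it < 256)
    {
        float xtemp = x * x - y * y + x0;
        y = 2.0f * x * y + y0;
        x = xtemp;
        it++;
    }

    //Write an RGBA color based on the escape iteration count
    int idx = 4 * (i + N * j);
    float t = it / 256.0f;
    colors[idx] = t; colors[idx + 1] = 0.0f;
    colors[idx + 2] = 1.0f - t; colors[idx + 3] = 1.0f;
}
```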

5.1 Creating CL variables from GL buffers

There are two ways to create OpenCL variables from GL buffers using OpenCLTemplate:

1. From a GLVBOModel object: create the object and retrieve the desired GL buffer using model.GetCL<Vertex/Normal/Color/Elem/TexCoord>Buffer as shown below:

//Creates a plane from equation
GLRender.GLVBOModel plane = GLRender.GLVBOModel.CreateSurface(
    new float[] { -100, 100, N },
    new float[] { -100, 100, N },
    new string[] { "0.0f", "v", "u" },
    new string[] { "0.0f", "0.01f*fabs(v)", "0.01f*fabs(u)"},
    new string[] { "1.0f", "0.0f", "0.0f" });
//Retrieves OpenCL variable from OpenGL buffer
CLCalc.Program.Variable CLcolor = plane.GetCLColorBuffer();

Again, it is possible to retrieve any of the vertex coordinate/normal/texture coordinate/element buffers for use in OpenCL just by calling the appropriate Get method.

2. Directly from an OpenGL buffer object, using the appropriate CLCalc.Program.Variable constructor:

CLCalc.Program.Variable v = new CLCalc.Program.Variable(GLBuffer, sizeof(float));

In the line above, GLBuffer is a valid OpenGL buffer and the second argument is the size in bytes of each element of the buffer.

5.2 Acquiring and releasing VBOs

Prior to using CL variables created from GL buffers it is necessary to acquire them into OpenCL, as explained in the Interop Tutorial, and to release them back to OpenGL after manipulation. These operations ensure that OpenGL will not read corrupted data that is still being manipulated in the OpenCL context.

The following code shows how to acquire/release shared buffers in OpenCLTemplate. It’s ok to try to acquire non-shared buffers; OpenCLTemplate will just ignore the ones which don’t come from OpenGL.

//Creates kernel arguments
CLCalc.Program.Variable[] args = new CLCalc.Program.Variable[] { CLcolor, CLbounds };
//CLcolor variable comes from OpenGL. It is necessary to acquire it
CLGLInteropFunctions.AcquireGLElements(args);
//Perform operation
kernelMandelbrot.Execute(args, new int[] { N, N });
//Release GL buffers to allow OpenGL drawing
CLGLInteropFunctions.ReleaseGLElements(args);

The only difference regarding normal usage is the need to acquire the kernel arguments and release them afterwards.

5.3 Using OpenCL buffers created from VBOs

Aside from the Acquire/Release commands, usage of OpenCL buffers created from VBOs is no different than a regular OpenCL variable.

5.4 Read/write and recommendations

Keeping in mind that only one of OpenCL or OpenGL can use a VBO at a time, it’s usually a good idea to perform only simple operations on shared buffers, or even to acquire the buffers solely to copy data to or from them.

In other words, it might be a good idea to copy the shared buffer contents to a non-shared variable, release the buffers, perform the OpenCL computations, and only then copy the results back.
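A hedged sketch of that pattern is shown below. The Acquire/Release helpers are those of OpenCLTemplate.CLGLInterop.CLGLInteropFunctions; ReadFromDeviceTo and WriteToDevice are assumed here as CLCalc.Program.Variable’s copy methods (a device-to-device copy, if available, would avoid the host round trip):

```csharp
CLCalc.Program.Variable[] shared = new CLCalc.Program.Variable[] { CLcolor };
float[] hostColors = new float[4 * N * N];

//Briefly acquire the shared buffer just to copy its contents out
CLGLInteropFunctions.AcquireGLElements(shared);
CLcolor.ReadFromDeviceTo(hostColors);
CLGLInteropFunctions.ReleaseGLElements(shared); //OpenGL may draw while we compute

//...perform long OpenCL computations on non-shared buffers here...

//Briefly acquire again, only to write the results back
CLGLInteropFunctions.AcquireGLElements(shared);
CLcolor.WriteToDevice(hostColors);
CLGLInteropFunctions.ReleaseGLElements(shared);
```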

6. Conclusion

We have demonstrated a powerful and simple OpenCLTemplate framework, GLRender, together with the GLVBOModel 3D model holder and OpenCLTemplate.CLGLInteropFunctions, that allows users to easily create an OpenGL window with a shared OpenCL context.

Source code and a video are provided to demonstrate how simple it is to create a window with mouse interaction using the OpenCLTemplate framework; another program shows how to use CLGL interop to draw a Mandelbrot set.

Download CLGLDemo and CLMandelbrot sources

7. References

The OpenTK project, http://www.opentk.com, accessed Feb 2011
The NeHe (Neon Helium) OpenGL tutorial, http://nehe.gamedev.net, accessed Feb 2011

Appendix: Understanding OpenGL Vertex Buffer Objects

A1. Basics: minimum information to draw 3D objects

Prior to introducing OpenGL basic commands, we’ll introduce the minimum information necessary to draw 3D elements onto the 2D screen (in our case, using OpenGL in C# via OpenTK). This knowledge is very basic and absolutely necessary in order to understand VBOs.

To draw 3D models using a rendering system, it is necessary to know the following information:

  • Spatial coordinate of the vertexes of the model;
  • Color of each vertex;
  • What vertexes are linked to each other and by what type of primitive.

In order to compute illumination and textures, one also needs to inform:

  • Vertex normals;
  • Texture coordinates.

A user who is unfamiliar with the concepts above should spend some time studying OpenGL tutorials such as the classic NeHe OpenGL tutorial. Don’t bother delving into the details of the commands, though, because many of them aren’t used anymore except for educational purposes.

A2. Brief history of OpenGL drawing commands

In the early days of OpenGL, objects were drawn by passing vertexes, colors and normals one by one. Take as an example NeHe’s lesson 07 commands to draw a cube:

// Front Face
glNormal3f( 0.0f, 0.0f, 1.0f);     // Normal Pointing Towards Viewer
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f); // Point 1 (Front)
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f); // Point 2 (Front)
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f); // Point 3 (Front)
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f); // Point 4 (Front)
// Back Face
glNormal3f( 0.0f, 0.0f,-1.0f);     // Normal Pointing Away From Viewer
glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f); // Point 1 (Back)
glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f); // Point 2 (Back)
glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f); // Point 3 (Back)
glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Point 4 (Back)
// Top Face
glNormal3f( 0.0f, 1.0f, 0.0f);     // Normal Pointing Up
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f); // Point 1 (Top)
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f,  1.0f,  1.0f); // Point 2 (Top)
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f,  1.0f,  1.0f); // Point 3 (Top)
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f); // Point 4 (Top)
// Bottom Face
glNormal3f( 0.0f,-1.0f, 0.0f);     // Normal Pointing Down
glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f, -1.0f, -1.0f); // Point 1 (Bottom)
glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Point 2 (Bottom)
glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f); // Point 3 (Bottom)
glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f); // Point 4 (Bottom)
// Right face
glNormal3f( 1.0f, 0.0f, 0.0f);     // Normal Pointing Right
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -1.0f); // Point 1 (Right)
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -1.0f); // Point 2 (Right)
glTexCoord2f(0.0f, 1.0f); glVertex3f( 1.0f,  1.0f,  1.0f); // Point 3 (Right)
glTexCoord2f(0.0f, 0.0f); glVertex3f( 1.0f, -1.0f,  1.0f); // Point 4 (Right)
// Left Face
glNormal3f(-1.0f, 0.0f, 0.0f);     // Normal Pointing Left
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -1.0f); // Point 1 (Left)
glTexCoord2f(1.0f, 0.0f); glVertex3f(-1.0f, -1.0f,  1.0f); // Point 2 (Left)
glTexCoord2f(1.0f, 1.0f); glVertex3f(-1.0f,  1.0f,  1.0f); // Point 3 (Left)
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -1.0f); // Point 4 (Left)

glBegin specifies how the vertexes should be linked together; a comprehensive list of OpenGL primitives can be found in the OpenGL documentation. Nowadays the primitive type is embedded in the OpenGL draw command used to render the vertex buffer objects, as will be shown in section A5 below.

This structure, which sends each piece of information individually, seemed intuitive in the early days of GPU programming, in part because it closely resembles the way we ourselves draw geometric objects by linking lines together. Each glVertex3f command specifies a vertex coordinate; that vertex uses the current normal and texture coordinate. This immediate mode still works in almost every desktop OpenGL implementation (it is not available in OpenGL ES, used on iPhones and iPads).

The problem with this approach is the enormous number of API calls necessary to draw complex objects. For instance, consider the task of drawing a colored sphere. If we use an NxN grid we need N² API calls to input vertex normals, N² calls to input colors and N² calls to specify the vertex spatial coordinates. This is too much, especially considering that nowadays it’s not unusual for models to have millions of vertexes.

This is how buffer objects were born: to greatly reduce the number of API calls by packing the data into a compact form. As an example, nowadays the vertex coordinates of the cube would be specified in a single array:

float[] vertexCoords = new float[]
{
    -1.0f, -1.0f, -1.0f,
    -1.0f, -1.0f,  1.0f,
    -1.0f,  1.0f, -1.0f,
    -1.0f,  1.0f,  1.0f,
     1.0f, -1.0f, -1.0f,
     1.0f, -1.0f,  1.0f,
     1.0f,  1.0f, -1.0f,
     1.0f,  1.0f,  1.0f
};
A3. The vertex coordinates, normals and color VBOs

As we’ve seen, VBOs constitute a more efficient way to pass information to the GPU device. It is important, though, to pay attention to the data structure.

In a 3D world, each vertex has 3 coordinates which have to be informed by using 3 floats. Normal vectors also have 3 components each and play an important role when the OpenGL pipeline computes lighting. Colors, on the other hand, usually have 4 components: red, green, blue and alpha (the transparency). Other structures will not be covered in this OpenGL VBO Overview.

This means that if a 3D model has N vertexes, the vertex coordinates and normals buffer will have 3N elements, whereas the color VBO will have 4N elements and the texture coordinates will have 2N. Feel free to browse the repository and inspect the details of how a GLRender.GLVBOModel is created.

A4. The Elements VBO

Now that the GPU knows the vertex coordinates, colors and normals, it’s still necessary to inform which vertexes should be connected to each other. The primitive being used determines how many vertexes are needed per element.

For instance, a GL.Triangles mode created as follows:

int[] elem = new int[] { 0, 2, 3,   6, 7, 10 };

will create 2 triangles: one using vertexes 0, 2 and 3 and the other using vertexes 6, 7 and 10. A wireframe representation of these objects using GL.Lines would be achieved by using the following elements:

int[] elem = new int[]
{
    0, 2,
    2, 3,
    3, 0,
    6, 7,
    7, 10,
    10, 6
};

Again, we recommend studying OpenTK and NeHe’s guides for more detailed information.

A5. Using OpenGL to render VBOs

OpenGL renders VBOs through EnableClientState calls, and quite a few commands are necessary to perform the operation. Once again, feel free to browse the repository and inspect the details of how the buffer objects of a GLRender.GLVBOModel are created.
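A hedged sketch of that draw path using OpenTK’s GL wrapper might look as follows; this mirrors the classic client-state technique, and GLRender’s actual implementation may differ (enum names can also vary between OpenTK versions):

```csharp
//Sketch: drawing a VBO with client states (vertexVBO, colorVBO, elemVBO,
//elemCount are assumed to be valid buffer handles and the element count)
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.ColorArray);

GL.BindBuffer(BufferTarget.ArrayBuffer, vertexVBO);
GL.VertexPointer(3, VertexPointerType.Float, 0, IntPtr.Zero);
GL.BindBuffer(BufferTarget.ArrayBuffer, colorVBO);
GL.ColorPointer(4, ColorPointerType.Float, 0, IntPtr.Zero);

//The primitive type goes in the draw call itself
GL.BindBuffer(BufferTarget.ElementArrayBuffer, elemVBO);
GL.DrawElements(BeginMode.Triangles, elemCount, DrawElementsType.UnsignedInt, IntPtr.Zero);

GL.DisableClientState(ArrayCap.ColorArray);
GL.DisableClientState(ArrayCap.VertexArray);
```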

As a final remark, let us point out that there are many other ways to render objects using OpenGL. This Appendix is nowhere close to a complete reference, and professional developers are highly encouraged to refer to the official OpenGL programming guide (popularly known as the OpenGL Red Book).

Appendix B: Video transcript

Hi. This is Douglas from CMSoft and I am going to show you how to use the new GLRender framework embedded in OpenCLTemplate in order to create an OpenCL OpenGL interoperation screen.

We’ll start from scratch – from the initial screen of Visual C#. Create a new project… New project… I’ll call it DemoCLGL. Now I’m going to include some references to OpenCLTemplate and the other DLLs necessary for the drawings and the interoperation.

Go to references… first I need to save the project. Now I go to references, add these references to the project… Ok.

Now I’m going to create a shared OpenCL OpenGL screen. I’m going to use OpenCLTemplate’s CLGLInterop framework. Now I’m going to create a new shared screen. I’ll leave in the Form space in order to be able to access it afterwards. And, in the Load event, I’m going to initialize the OpenGL window. New -> I’ll use this form as the parent form and I am going to create the shared environment. I’m going to use -1 for default.

Compiling the code… as you can see, with code that has just a single line I was able to create a 3D interactive environment. Let’s add some code here now. I’ll add a surface. I’m going to create the object using a 3D object created inside the GLRender framework. I’m going to create a new vertex buffer object model that is automatically rendered in the shared window.

I’m going to create the surface using equations. I’m going to parameterize the surface using uv coordinates and these parameters are as follows:

This is the minimum value of u, the maximum value of u, and the number of points in the u direction. V coordinates: I’m going to use -10 to 10 too, and also 500 points. Now the equations for the vertex coordinates. I’m going to create a saddle surface, which is u times v; the equations are parameterized as u*v. I’m going to give the desired color equations, remembering that they need to be scaled from zero to 1. The components are in red, green, blue order. Now I’m going to provide the equations for the normal vectors. You don’t need to normalize these – the normalization is done in the creator. That’s it, I’ve just created a new surface. Now I’m going to show the surface using the interoperation window. Add the model – ok, let me correct the 3 equations; they need to be 3 separate equations, not a single string here.

Now I’m going to execute the program. And we see that with 3 lines of code this is what’s possible to create – a 3D surface representing a mathematical function. Finally, I’m going to add the 3D mouse functionality. I’ll go to the design and add a toolstrip and two buttons. In the first one I’ll set the mouse mode to rotate model, which is the one we’ve been using until now, and in the other button I’m going to configure the mouse 3D mode. I’m going to execute the program again. I have an interactive window – now the 3D mouse is here. I use the minus key to reduce the radius of the 3D mouse, and by clicking and dragging I can modify the surface. There. I can also cut the surface using the 3D mouse. And this was all created with 1, 2, 3, 4, 5, 6 lines of code. Thank you very much for your attention.


2 thoughts on “OpenCL/OpenGL Interop Framework”

  1. Nice presentation.

    Do you think it is possible to enable a third interop with a physics engine like Bullet Physics, so that you don’t need to marshal between managed and unmanaged code in two directions for the calculation of your 3D objects, as long as they are moving according to the physics that you defined up front? In that case, C# could push only new external input (or script-like behaviors) into the three-way interop (i.e., a mouse click bumping a box or a timed impulse force) – relatively small data to deal with. So C# would macromanage the world instead of micromanaging it? Wouldn’t that give you a C# physics app with performance somewhere closer to native C++?

    How much of a stretch would that be?

    1. Your OpenCL kernel or any vertex shader can easily become a physics simulator.

      A simple, easy method is to start with code for boids or for Euler/Verlet integration, or a union of both. Both are iterative and easily parallelized.

      See the YouTube video called:
      Coding Math – Verlet Integration.

      And you have a basis for ragdoll physics for a vertex shader or OpenCL. Then just calculate collisions each frame and you have a nice physics engine. The video uses Java. I implemented it in Second Life script, and even in its unaccelerated mono-compiled version it runs fine.
