It has been almost a year since I last touched OpenGL, which I still have not finished learning. Work, life, and study have all been in poor shape this past year. This time I hope to learn OpenGL properly and build a solid foundation in graphics rendering. – 游戏梦

Reference books:
《OpenGL Programming Guide, 8th Edition》 – Addison-Wesley
《Fundamentals of Computer Graphics, 3rd Edition》 – Peter Shirley, Steve Marschner
《Real-Time Rendering, Third Edition》 – Tomas Akenine-Möller, Eric Haines, Naty Hoffman

Old blog address

Learning rendering concepts

OpenGL

Introduction to OpenGL

What is OpenGL?

  1. OpenGL is an application programming interface – “API” for short – which is merely a software library for accessing features in graphics hardware.
  2. OpenGL is a “C” language library.

History

It was first developed at Silicon Graphics Computer Systems, with Version 1.0 released in 1992 (wiki)

Next Generation OpenGL

Vulkan

OpenGL relative knowledge

OpenGL render pipeline

OpenGL Render Pipeline

  1. Vertex Data
    Sending Data to OpenGL

  2. Vertex Shader
    Process the data associated with that vertex

  3. Tessellation Shader
    Tessellation uses patches to describe an object’s shape, and allows relatively simple collections of patch geometry to be tessellated to increase the number of geometric primitives, providing better-looking models (e.g. LOD)

  4. Geometry Shader
    Allows additional processing of individual geometric primitives, including creating new ones, before rasterization

  5. Primitive Assembly
    Organizes the vertices into their associated geometric primitives in preparation for clipping and rasterization

  6. Clipping
    Clips vertices and primitives that lie outside of the viewport – this operation is handled automatically by OpenGL

  7. Rasterization
    Fragment generation. Pixels have a home in the framebuffer, while a fragment still can be rejected and never update its associated pixel location.

  8. Fragment Shading
    Use a fragment shader to determine the fragment’s final color, and potentially its depth value

  9. Per-Fragment Operations
    If a fragment successfully makes it through all of the enabled tests (eg: depth testing, stencil testing), it may be written directly to the framebuffer, updating the color of its pixel, or if blending is enabled, the fragment’s color will be combined with the pixel’s current color to generate a new color that is written into the framebuffer

Note:
Fragment’s visibility is determined using depth testing and stencil testing
Pixel data is usually stored in texture map for use with texture mapping, which allows any texture stage to look up data values from one or more texture maps.

OpenGL Shader Language (GLSL)

GLSL (OpenGL Shading Language), also called GLslang, is a high-level shading language based on C. It was created by the OpenGL ARB to give developers more direct control over the graphics pipeline without having to use assembly or hardware-specific languages.

Compilation and execution
GLSL shaders are not standalone applications; they require an application that uses the OpenGL API. C, C++, C#, Delphi, and Java all support the OpenGL API, and thus the OpenGL Shading Language.
GLSL shaders themselves are just sets of strings. These strings are passed to the hardware vendor's driver and compiled through the OpenGL API entry points from inside the program. Shaders can be created on the fly from within the program or read in from plain-text files, but they must be handed to the driver as strings.

Tools
GLSL shaders can be written and tested ahead of time. Existing GLSL development tools include:
RenderMonkey – made by ATI, it provides an interface for creating, compiling, and debugging GLSL shaders, as well as DirectX shaders. It runs only on Windows.
GLSLEditorSample – on Mac OS X it is currently the only available tool; it supports creating and compiling shaders, but not debugging. It is a Cocoa application and runs only on Mac OS X.
Lumina – a newer GLSL development tool. It uses a Qt interface and is cross-platform.

The color space in OpenGL

In OpenGL, colors are represented in what’s called the RGB color space

.obj and .mtl file format

Reference articles:
obj文件基本结构及读取 - 计算机图形学
3D模型-OBJ材质文件 MTL格式分析
.mtl文件格式解析 - [建模]

“OBJ files do not contain color definitions for faces, but they can reference a material library, which is stored in a separate file with the ”.mtl” extension. The keyword ”mtllib” references the material library. A material library contains the RGB definitions of a material's diffuse, ambient, and specular colors, along with other characteristics such as specularity, refraction, and transparency. Once ”usemtl” specifies a material, all subsequent faces use that material, until the next ”usemtl” specifies a new one.”

Some of the basic format keywords in .obj files:

‘#’ works like // in C++ code: if a line starts with #, the whole line is a comment and can be ignored when parsing.

g is short for group; it introduces a mesh group, followed by the group's name.

v is short for Vertex. It gives a vertex's coordinates in the local coordinate system and can have three or four components. I only consider three components here, because for a normal triangle mesh the fourth component is 1 and can be treated as the default. If it is not 1, the vertex is probably a parameter vertex of a free-form surface, which we will not cover here, since most programs use triangles.

vn is Vertex Normal. These vectors are all unit length; we can assume the software that generated the .obj file normalized them for us.

vt is Vertex Texture Coordinate. There are usually two components, though one or three are also possible; I only consider the two-component case here.

mtllib <matFileName> means the following name is a material description file; we can look up that file by name and parse the materials from it.

usemtl <matName> applies the material named matName; all faces described afterwards use this material, until the next usemtl.

f is face, the keyword that actually describes a face. It is followed by a list of indices, usually three, sometimes four (OpenGL can render quads directly; DX has to split them into two triangles). Each index entry may contain a vertex index, a texture coordinate index, and a normal index, separated by /.

A .mtl file (Material Library File) is a material library that describes the material information of objects. It is stored as ASCII, so any text editor can open and edit it. A single .mtl file can contain one or more material definitions; each material has descriptions of its color, textures, and reflection maps, which are applied to the surfaces and vertices of objects.
Some of the basic format keywords in .mtl files:

The basic structure of a material library file:
newmtl mymtl_1
material color and illumination definitions
texture map definitions
reflection map definitions
……

Note: each material library can contain multiple material definitions, and every material has a material name. newmtl mtlName defines a material. For each material you can define its color, illumination, texture, reflection, and other characteristics. The main definition formats are described below:

////////////////////////////////////////////////
Material color and illumination
1. Ambient reflectivity has the following three description formats, which are mutually exclusive and cannot be used at the same time.
Ka r g b —— RGB color values. The g and b parameters are optional; if only r is given, g and b both default to the value of r. The three parameters normally range from 0.0 to 1.0; values outside this range increase or decrease the reflectivity accordingly;
Ka spectral file.rfl factor —— described by an .rfl file. factor is an optional multiplier for the values in the .rfl file, defaulting to 1.0;
Ka xyz x y z —— CIEXYZ values, where x, y, z are the components in the CIEXYZ color space. The y and z parameters are optional; if only x is given, y and z both default to the value of x. The three parameters normally range from 0 to 1.

2. Diffuse reflectivity, three formats:
Kd r g b
Kd spectral file.rfl factor
Kd xyz x y z

3. Specular reflectivity, three formats:
Ks r g b
Ks spectral file.rfl factor
Ks xyz x y z

4. Transmission filter, three formats:
Tf r g b
Tf spectral file.rfl factor
Tf xyz x y z

5. Illumination model:

illum illum_#
Specifies the material's illumination model. illum is followed by a number from 0 to 10; each value selects a different illumination model.

From the above we can see that .obj describes the collection of data about vertices, normals, faces, texture coordinates, and material references, while .mtl is the file that defines the actual material information.

Now that we understand how .obj and .mtl contents are described, let's look at real .obj and .mtl files to learn from them. The following comes from the spider.obj and spider.mtl of Modern OpenGL Tutorials – 3D Picking:

spider.obj

Wavefront OBJ exported by MilkShape 3D

mtllib spider.mtl

v 1.160379 4.512684 6.449167
…..

762 vertices

vt 0.186192 0.222718
…..

302 texture coordinates

vn -0.537588 -0.071798 0.840146
……

747 normals

g HLeib01
usemtl HLeibTex
s 1
f 1/1/1 2/2/2 3/3/3
……

80 triangles in group

……

1368 triangles total

From the second line, mtllib spider.mtl, we can see that spider.obj specifies spider.mtl as the file describing its textures and materials.
The subsequent v, vn, vt, f, g entries describe all the data about vertices, vertex normals, vertex texture coordinates, faces, and groups.
The usemtl HLeibTex that follows g says this group uses the material named HLeibTex (the concrete material definition is in the spider.mtl file specified earlier).

#
# spider.mtl
#

newmtl Skin
Ka 0.200000 0.200000 0.200000
Kd 0.827451 0.792157 0.772549
Ks 0.000000 0.000000 0.000000
Ns 0.000000
map_Kd .\wal67ar_small.jpg

newmtl Brusttex
Ka 0.200000 0.200000 0.200000
Kd 0.800000 0.800000 0.800000
Ks 0.000000 0.000000 0.000000
Ns 0.000000
map_Kd .\wal69ar_small.jpg

newmtl HLeibTex
Ka 0.200000 0.200000 0.200000
Kd 0.690196 0.639216 0.615686
Ks 0.000000 0.000000 0.000000
Ns 0.000000
map_Kd .\SpiderTex.jpg

newmtl BeinTex
Ka 0.200000 0.200000 0.200000
Kd 0.800000 0.800000 0.800000
Ks 0.000000 0.000000 0.000000
Ns 0.000000
map_Kd .\drkwood2.jpg

newmtl Augentex
Ka 0.200000 0.200000 0.200000
Kd 0.800000 0.800000 0.800000
Ks 0.000000 0.000000 0.000000
Ns 0.000000
map_Kd .\engineflare1.jpg

The fifth line, newmtl Skin, defines a material name; the Ka, Kd, Ks, Ns, and map_Kd lines that follow configure the material's ambient reflectivity, diffuse reflectivity, specular reflectivity, specular exponent, and texture image, respectively.

Putting all this together: when we load spider.obj through assimp, spider.mtl is used as the material configuration file for the material information we read, so we know we need spider.obj, spider.mtl, wal67ar_small.jpg, wal69ar_small.jpg, SpiderTex.jpg, drkwood2.jpg, and engineflare1.jpg to provide the complete data for rendering the mesh.

OpenGL learning journal

API

  1. Check which error flags are set
    Background:
    OpenGL internally keeps a set of error flags (four in total), each representing a different type of error. When an error occurs, the corresponding flag is set. Even if more than one flag is set, glGetError still returns only a single value; that value is cleared when glGetError is called, and further calls to glGetError return another error flag, until GL_NO_ERROR is returned
    Function:
    GLenum glGetError(void);

  2. Query the vendor and version number of the OpenGL rendering engine (the OpenGL driver)
    Background:
    OpenGL lets vendors innovate through its extension mechanism. To use extension features provided by a particular vendor, we may want to require a minimum version of that vendor's driver
    Function:
    const GLubyte *glGetString(GLenum name);

  3. Set and query pipeline state
    Background:
    OpenGL uses a state model that tracks all OpenGL state variables to control the rendering state
    Functions:
    void glEnable(GLenum capability);
    void glDisable(GLenum capability);
    void glGet*(Type)v(GLenum pname, GLboolean *params);

  4. Query information and error messages for a program
    Background:
    Linking an OpenGL program may fail because of errors in the GLSL; we need the error information for the program object, and we may also want other information about the current program
    Function:
    void glGetProgramiv(GLuint program, GLenum pname, GLint *params);

  5. Get the log information when program linking fails
    Background:
    Linking an OpenGL program object may fail; we need the error information from the link
    Function:
    void glGetProgramInfoLog(GLuint program, GLsizei maxLength, GLsizei *length, GLchar *infoLog);

Note:
glGetProgramiv() can be used to get log-related information about a program, such as GL_INFO_LOG_LENGTH

OpenGL Knowledge:

  1. “OpenGL Execute Model:
    The model for interpretation of OpenGL commands is client-server. An application (the client) issues commands, which are interpreted and processed by OpenGL (the server). The server may or may not operate on the same computer as the client. In this sense, OpenGL is network-transparent. “

  2. “The client-server model:
    OpenGL uses a client-server model. When your application calls an OpenGL function, it tells the OpenGL client, and the client passes the rendering command to the server. The client and server may be different computers, or different processes on the same computer. Generally the server work happens on the GPU while the client runs on the CPU, which offloads the CPU and makes efficient use of the GPU.”

If the client and server are not on the same machine, a network transport protocol framework is needed for them to communicate:
X Window System

Note, however, that in the X Window System the roles of client and server are the reverse of the traditional C/S model: the client does the computation and the server does the display.
Still, the way OpenGL's client and server communicate is similar to the X Window System.

OpenGL Practice

Check supported OpenGL version

  1. Install the appropriate graphics driver, which enables usage of the functionality provided.
    Check for graphics driver updates (update the driver to get support for the latest OpenGL version).
  2. Use OpenGL Extensions Viewer to check which OpenGL version is supported by your hardware.
    download website

OpenGL_Viewer_Info
As the screenshot above shows, my current machine and graphics driver support at most OpenGL 4.4. Before learning and using OpenGL, make sure you know the highest version your machine supports, to avoid unnecessary problems later.

After checking the supported OpenGL version, we will introduce two important libraries (Glut & Glew) that help you learn and use OpenGL quickly.

Know what Glut and Glew are, and how to use them

  1. Glut (OpenGL Utility Toolkit)
    GLUT (OpenGL Utility Toolkit) is a utility library for OpenGL programs. It handles calls to the underlying operating system and I/O, and includes the following common functionality:

    1. Defining and controlling windows
    2. Detecting and handling keyboard and mouse events
    3. Drawing common solid shapes with a single function call, such as boxes, spheres, and the Utah teapot (solid or wireframe, e.g. glutWireTeapot())
    4. A simple menu bar implementation

    GLUT was written by Mark J. Kilgard while he worked at Silicon Graphics; he is also the author of OpenGL Programming for the X Window System and The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics.

    GLUT's two main goals are to provide a cross-platform library (which it is) and to make OpenGL easier to learn. Writing OpenGL with GLUT usually takes only a few extra lines of GLUT code, and you do not need to know each operating system's windowing API.

    All GLUT functions start with the glut prefix, for example glutPostRedisplay().

  2. Glew ( OpenGL Extension Wrangler Library)
    The OpenGL Extension Wrangler Library (GLEW) is a cross-platform C/C++ library that helps in querying and loading OpenGL extensions. GLEW provides efficient run-time mechanisms for determining which OpenGL extensions are supported on the target platform. All OpenGL extensions are exposed in a single header file, which is machine-generated from the official extension list.(Glew是一个支持跨平台的C/C++库,用于运行时鉴别OpenGL扩展所支持的版本)
    more info for extension tools

    How to use Glut & Glew?

    1. add glut.lib & glew.lib into additional dependencies
    2. add the directory that includes glut.h & glew.h into include directories
    3. Include GL/freeglut.h & GL/glew.h in source file
      Note:
      if you use static linking, #define FREEGLUT_STATIC before you include GL/freeglut.h, otherwise the linker will look for freeglut.lib. Likewise, #define GLEW_STATIC for Glew.

    include GL/glew.h before GL/freeglut.h, otherwise it will throw “fatal error C1189: #error : gl.h included before glew.h”

Note:
The following study is based on Modern OpenGL Tutorials; the source for the libraries mentioned later can be downloaded from that site

Open a Window

IncludeFiles.h

#include <iostream>

using namespace std;

#define FREEGLUT_STATIC

//Glut part
#include <GL/freeglut.h>

OpenGLWindow.h

#include "IncludeFiles.h"

static void RenderCallback()
{
glClear(GL_COLOR_BUFFER_BIT);
//Swap buffer
glutSwapBuffers();
}

static void InitializeGlutCallback()
{
//sets the display callback for the current window
glutDisplayFunc(&RenderCallback);
}

int main(int argc, char** argv)
{
//Initializes GLUT
glutInit(&argc, argv);

//GLUT_DOUBLE -- double buffer rendering
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);

//Initialize window info
glutInitWindowSize(1024, 768);
glutInitWindowPosition(100,100);
glutCreateWindow("OpenGLWindow");

InitializeGlutCallback();

//Set the color used when clearing the framebuffer
glClearColor(0.0f,0.0f,0.0f,0.0f);

//enable program to enter the window event loop
glutMainLoop();

return 0;
}

final result:
OpenGL_Window

As shown above, we mainly call glut to initialize and create the window
Through glut's API we can register callbacks and set the OpenGL state we need during rendering
There are four important glut APIs above:

  1. glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA) – GLUT_DOUBLE enables double-buffered rendering, which is more efficient: one buffer is used for display while the other is filled with the next frame's data
  2. glutDisplayFunc – sets glut's render callback
  3. glutMainLoop – starts glut's window event loop
  4. glutCreateWindow – after setting the relevant parameters, this call creates the window we want, and the OpenGL Context is also created at this point

Glut provides many more window-related features, which we will use later

Using OpenGL

The previous chapter only used glut to initialize a basic window and barely used the OpenGL API. Before using the OpenGL API, we need Glew to check the currently supported OpenGL version, so that we can call the corresponding APIs correctly.

The preparation for using Glew was already covered in “How to use Glut & Glew?”, so it is not repeated here.

Because Glew needs a context to look up the supported OpenGL version and entry points, Glew must be initialized after the OpenGL Context is created.

Note:
Call glewInit after glutCreateWindow call successfully

So first we have to understand what an OpenGL Context is.
“OpenGL Context”
An OpenGL context is essentially a state machine that stores all data related to the rendering of your application. When your application closes, the OpenGL context is destroyed and everything is cleaned up.

Combined with the wiki page Creating an OpenGL Context (WGL), my understanding here is still rough: roughly, the OpenGL Context is to OpenGL what a Device Context (DC) is to Windows. The Context holds a lot of rendering-related state (whether double buffering is used, how many bits the depth buffer has, the color mode, the window size, and other information needed for rendering)

Here we only need to know that after initializing Glut and calling glutCreateWindow to create the window, our OpenGL Context has been created.

That is why Glut must be initialized and the window created before initializing Glew.

For further reading:
Creating an OpenGL Context (WGL)

Using Glew
Now back to using Glew, to draw our first OpenGL point.
IncludeFiles.h

#include <stdio.h>
#include <iostream>

using namespace std;

//Glew part
#define GLEW_STATIC
#include <GL/glew.h>

//Glut part
#define FREEGLUT_STATIC
#include <GL/freeglut.h>

#include "ogldev_math_3d.h"

UsingOpenGL.h

#include "IncludeFiles.h"

GLuint VBO;

static void RenderCallback()
{
glClear(GL_COLOR_BUFFER_BIT);

glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, VBO);
//Tells OpenGL how to interpret the data inside the buffer
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

//Draw call
glDrawArrays(GL_POINTS, 0,1);

//Disable the vertex that is not used anymore after draw call
glDisableVertexAttribArray(0);

//Swap buffer
glutSwapBuffers();
}

static void InitializeGlutCallback()
{
//sets the display callback for the current window
glutDisplayFunc(&RenderCallback);
}

static void CreateVertexBuffer()
{
Vector3f vertices[1];
vertices[0] = Vector3f(0.0f,0.0f,0.0f);

/*
/ Apply a buffer handles
/ Bind buffer handle to specific buffer target
/ Filling the data for buffer target
*/
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
}

static void InitializeGlutAndWindow(int argc, char** argv, const char* windowsname)
{
//Initializes GLUT
glutInit(&argc, argv);

//GLUT_DOUBLE -- double buffer rendering
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);

//Initialize window info
glutInitWindowSize(1024, 768);
glutInitWindowPosition(100,100);

glutCreateWindow(windowsname);

InitializeGlutCallback();

//Initialize glew
GLenum res = glewInit();
if(res != GLEW_OK)
{
cout<<"Error: "<<glewGetErrorString(res)<<endl;
}

//Set the color used when clearing the framebuffer
glClearColor(0.0f,0.0f,0.0f,0.0f);

CreateVertexBuffer();

//enable program to enter the window event loop
glutMainLoop();
}

int main(int argc, char** argv)
{
InitializeGlutAndWindow(argc, argv, "UsingOpenGL");
return 0;
}

As shown above, initializing glew is just a call to glewInit(), but it must happen after the OpenGL context has been created (i.e. after the window is created by glutCreateWindow)

Creating and using a vertex buffer involves 5 main steps:

  1. glGenBuffers() – create an available buffer object
  2. glBindBuffer() – bind the buffer object to a specific target type; the target describes what kind of data the buffer object holds and what it is used for
  3. glBufferData() – fill the buffer with data
  4. glVertexAttribPointer() – describe how to interpret the data inside the buffer, which also determines how the data is accessed from shaders (how to write, compile, link, and use shaders is covered later)
  5. glDrawArrays() – issue the draw call that says how to use and draw the data in the buffer

Note:
One thing to note here: to access a buffer's attribute data from a shader, we must call glEnableVertexAttribArray() to enable that particular attribute before issuing the draw call

final result:
OpenGL_Window

Using Shader

In the field of computer graphics, a shader is a computer program that is used to do shading: the production of appropriate levels of color within an image, or, in the modern era, also to produce special effects or do video post-processing.

That is Wikipedia's definition of a shader. Shaders appeared with the programmable pipeline: they are programs that process the graphics at the various rendering stages, making rendering much more flexible, and they run primarily on the GPU.

Shaders act on the various rendering stages:
The pipeline stages were covered earlier in “Understand OpenGL render pipeline”, so they are not repeated here. Shaders act on most of the pipeline, for example: the Vertex Shader (processes vertex data), the Tessellation Shader (works on patches of geometry that describe object shape; LOD is implemented at this stage), the Geometry Shader (takes whole geometric primitives as input; something like batch rendering can be implemented at this stage), and Fragment Shading (takes fragments as input)

The shading language was covered earlier in “Understand OpenGL Shader Language”, so it is not repeated either.

Clearly shaders play an essential role in today's programmable pipeline.
Next, let's see how to use shaders in OpenGL.

Using a shader involves the following main steps:

  1. Create a shader object – glCreateShader(GLenum type)
  2. Compile your shader source into the object – glShaderSource(******), glCompileShader(***)
  3. Verify that your shader compiled successfully – glGetShaderInfoLog(***)
  4. Create a shader program – glCreateProgram(void)
  5. Attach the appropriate shader objects to the shader program – glAttachShader(GLuint program, GLuint shader)
  6. Link the shader program – glLinkProgram(GLuint program)
  7. Verify that the shader link phase completed successfully – glGetProgramiv() & glGetProgramInfoLog(****)
  8. Use the shader for vertex or fragment processing – glUseProgram(GLuint program)

When a shader fails, the program exits quickly, making it hard to read the error messages on the console, so the best approach is to write the error messages to a text file for later inspection. Utils.h below implements compiling and using shaders and dumping the error messages to text files.
IncludeFiles.h

#include <iostream>

#include <fstream>

#include <stdio.h>

using namespace std;

//Glew part
#define GLEW_STATIC
#include <GL/glew.h>

//Glut part
#define FREEGLUT_STATIC
#include <GL/freeglut.h>

#include "ogldev_util.h"
#include "ogldev_math_3d.h"

Utils.h

#include "IncludeFiles.h"

void serializationShaderCompileLog(GLuint prog, GLuint shader, GLenum type, char *log)
{
char *stage_name = new char[50];
char temp[50];
switch(type)
{
case 0x8B31:
sprintf(temp, "program%d-shader%d-%s", prog, shader, "GL_VERTEX_SHADER");
strcpy(stage_name,temp);
break;
case 0x8DD9:
sprintf(temp, "program%d-shader%d-%s", prog, shader, "GL_GEOMETRY_SHADER");
strcpy(stage_name,temp);

//strcpy(stage_name,"program:" + prog + "shader:" + shader + "stage:" + "GL_GEOMETRY_SHADER");
break;
case 0x8B30:
sprintf(temp, "program%d-shader%d-%s", prog, shader, "GL_FRAGMENT_SHADER");
strcpy(stage_name,temp);

//strcpy(stage_name,"program:" + prog + "shader:" + shader + "stage:" + "GL_FRAGMENT_SHADER");
break;
}
cout<<"program:"<<prog<<"-shader:"<<shader<<"-stage:"<<stage_name<<" compile log:"<<endl;
cout<<log<<endl;

char *file_name = new char[50];

strcpy(file_name,stage_name);

strcat(file_name,".txt");

ofstream write_to_file;
write_to_file.open(file_name,ios::out);

write_to_file<<stage_name;
write_to_file<<" compiled log info;\n";
write_to_file<<log;
write_to_file.close();

delete []stage_name;
delete []file_name;

stage_name = nullptr;
file_name = nullptr;
}


void program_log_serialization(unsigned int program,char const *program_name,bool is_console_print)
{
GLchar *program_linked_log = NULL;
GLint log_length = 0;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &log_length);
program_linked_log = new char[log_length];

GLsizei program_linked_log_real_length;
glGetProgramInfoLog(program, log_length, &program_linked_log_real_length, program_linked_log);
if(is_console_print)
{
cout<<program_name<<" linked log info:"<<endl;
cout<<program_linked_log<<endl;
}

const int file_name_length = strlen(program_name);

char *log_file_whole_name = new char[file_name_length + 10];

strcpy(log_file_whole_name,const_cast<char*>(program_name));

strcat(log_file_whole_name,".txt");

ofstream write_to_file;
write_to_file.open(log_file_whole_name,ios::out);

write_to_file<<program_name<<" linked log info:\n";
write_to_file<<program_linked_log;
write_to_file.close();

delete []program_linked_log;
delete []log_file_whole_name;

program_linked_log = nullptr;
log_file_whole_name = nullptr;
}

static void AddShader(GLuint shaderprogram, const char* pshadertext, GLenum shadertype)
{
GLuint shaderobj = glCreateShader(shadertype);

if(shaderobj == 0)
{
cout<<"Error create shader type "<<shadertype<<endl;
exit(0);
}

const GLchar *p[1];
p[0] = pshadertext;
GLint lengths[1];
lengths[0] = strlen(pshadertext);

glShaderSource(shaderobj, 1, p, lengths);
glCompileShader(shaderobj);

GLint success;
glGetShaderiv(shaderobj, GL_COMPILE_STATUS, &success);
if(!success)
{
GLchar infolog[1024];
glGetShaderInfoLog(shaderobj, 1024, NULL, infolog);
cout<<"Error compiling shader type "<<shadertype<<endl;

serializationShaderCompileLog(shaderprogram, shaderobj, shadertype, infolog);

exit(1);
}

glAttachShader(shaderprogram, shaderobj);
}

static void CompileShader(GLuint shaderprogram, const char* psfilename, GLenum shadertype)
{
if(shaderprogram == 0)
{
cout<<"Error creating shader program"<<endl;
exit(1);
}

string s;

if(!ReadFile(psfilename, s))
{
cout<<psfilename<<" does not exist"<<endl;
exit(1);
}

switch(shadertype)
{
case 0x8B31:
AddShader(shaderprogram, s.c_str(), GL_VERTEX_SHADER);
break;
case 0x8DD9:
AddShader(shaderprogram, s.c_str(), GL_GEOMETRY_SHADER);
break;
case 0x8B30:
AddShader(shaderprogram, s.c_str(), GL_FRAGMENT_SHADER);
break;
}
}

static void LinkAndUseShaderProgram(GLuint shaderprogram)
{
GLint success = 0;
GLchar errorlog[1024] = {0};

glLinkProgram(shaderprogram);

glGetProgramiv(shaderprogram, GL_LINK_STATUS, &success);

if(success == 0)
{
glGetProgramInfoLog(shaderprogram, sizeof(errorlog), NULL, errorlog);
cout<<"Error linking shader program "<<errorlog<<endl;
program_log_serialization(shaderprogram, "LinkStatus", true);
exit(1);
}

glValidateProgram(shaderprogram);
glGetProgramiv(shaderprogram, GL_VALIDATE_STATUS, &success);
if(!success)
{
glGetProgramInfoLog(shaderprogram, sizeof(errorlog), NULL, errorlog);
cout<<"Invalid shader program "<<errorlog<<endl;
exit(1);
}

glUseProgram(shaderprogram);
}

UsingShader.cpp

#include "IncludeFiles.h"

#include "Utils.h"

GLuint VBO;

GLuint ShaderProgram;

const char* pVSFileName = "vsshader.vs";

const char* pFSFileName = "fsshader.fs";

static void RenderCallback()
{
glClear(GL_COLOR_BUFFER_BIT);

glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, VBO);
//Tells OpenGL how to interpret the data inside the buffer
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

//Draw call
glDrawArrays(GL_TRIANGLES, 0,3);

//Disable the vertex that is not used anymore after draw call
glDisableVertexAttribArray(0);

//Swap buffer
glutSwapBuffers();
}

static void InitializeGlutCallback()
{
//sets the display callback for the current window
glutDisplayFunc(&RenderCallback);
}

static void CreateVertexBuffer()
{
Vector3f vertices[3];
vertices[0] = Vector3f(-1.0f,-1.0f,0.0f);
vertices[1] = Vector3f(1.0f, -1.0f, 0.0f);
vertices[2] = Vector3f(0.0f, 1.0f, 0.0f);

/*
/ Apply a buffer handles
/ Bind buffer handle to specific buffer target
/ Filling the data for buffer target
*/
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
}

static void InitializeGlutAndWindow(int argc, char** argv, const char* windowsname)
{
//Initializes GLUT
glutInit(&argc, argv);

//GLUT_DOUBLE -- double buffer rendering
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);

//Initialize window info
glutInitWindowSize(1024, 768);
glutInitWindowPosition(100,100);

glutCreateWindow(windowsname);

InitializeGlutCallback();

//Initialize glew
GLenum res = glewInit();
if(res != GLEW_OK)
{
cout<<"Error: "<<glewGetErrorString(res)<<endl;
}

//Set the color used when clearing the framebuffer
glClearColor(0.0f,0.0f,0.0f,0.0f);

CreateVertexBuffer();

ShaderProgram = glCreateProgram();

CompileShader(ShaderProgram, pVSFileName, GL_VERTEX_SHADER);

CompileShader(ShaderProgram, pFSFileName, GL_FRAGMENT_SHADER);

LinkAndUseShaderProgram(ShaderProgram);

//enable program to enter the window event loop
glutMainLoop();
}

int main(int argc, char** argv)
{
InitializeGlutAndWindow(argc, argv, "UsingOpenGL");
return 0;
}

vsshader.vs

#version 330

layout (location = 0) in vec3 Position;

void main()
{
gl_Position = vec4(Position.x,Position.y,Position.z, 1.0);
}

fsshader.fs

#version 330

out vec4 FragColor;

void main()
{
FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

final effect:
UsingShader

The above only uses the Vertex Shader and Fragment Shader; the use of the other shaders will be covered later.

Uniform Variables

Uniform variables are used to communicate with your vertex or fragment shader from “outside”.

Uniform variables are read-only and have the same value among all processed vertices. You can only change them within your C++ program.

As stated above, uniform variables are used mainly in the vertex and fragment shaders, keep the same value for all processed vertices, and can only be changed from the C++ side.

Next, let's see how a uniform variable is applied in a shader:
Using a uniform variable involves the following steps:

  1. Obtain the uniform variable location after linking the shader program
gScaleLocation = glGetUniformLocation(ShaderProgram, "gScale");

assert(gScaleLocation != 0xFFFFFFFF);
  2. Set the uniform variable value
gScale += 0.01f;

glUniform1f(gScaleLocation, sinf(gScale));
  3. Define the uniform variable in the shader
#version 330

uniform float gScale;

layout (location = 0) in vec3 Position;

void main()
{
gl_Position = vec4(gScale * Position.x,gScale * Position.y,Position.z, 1.0);
}

final result:
UniformVariable

Interpolation

the interpolation that the rasterizer performs on variables that come out of the vertex shader.

In OpenGL's render pipeline, rasterization runs before the Fragment Shader executes. The rasterizer interpolates the per-pixel color data between the triangle's vertices, and we can then process the rasterized color data further in the Fragment Shader.

This chapter looks at how vertex data and per-pixel information are processed and passed along in the Vertex Shader and Fragment Shader. (Here we compute the color directly in the VS and pass it straight to the FS for processing.)
To pass data from the VS to the FS, we declare a variable with the out keyword in the Vertex Shader and a matching variable with the in keyword in the Fragment Shader.

vsshader.vs

#version 330

uniform float gScale;

layout (location = 0) in vec3 Position;

out vec4 Color;

void main()
{
gl_Position = vec4(gScale * Position.x,gScale * Position.y,Position.z, 1.0);

Color = abs(gl_Position);
}

fsshader.fs

#version 330

out vec4 FragColor;

in vec4 Color;

void main()
{
FragColor = Color;
}

final result:
Interpolation

As shown above, after the color is computed in the VS, rasterization interpolates the color between the triangle's vertices; the result is passed to the FS and output to the screen as the final color.

Coordinate Transformations & Perspective Projection

This chapter studies how matrices are used in 3D graphics and how objects end up at the correct position on screen.
Note:
The derivations below use OpenGL's column vectors, not DX's row vectors.

Before explaining how matrices are used to transform vectors, let's see why a matrix can transform a vector at all.
The following draws on 《3D Math Primer for Graphics and Game Development》.
A 3D vector can be interpreted as a combination of displacements along 3 basis vectors (p, q, r being the three basis vectors):
V = x × p + y × q + z × r

When a vector is multiplied by a matrix:

    [ p ]   [ px py pz ]
M = [ q ] = [ qx qy qz ]
    [ r ]   [ rx ry rz ]

V = [ x y z ]

                  [ px py pz ]
V * M = [ x y z ] [ qx qy qz ] = [ x*px + y*qx + z*rx   x*py + y*qy + z*ry   x*pz + y*qz + z*rz ] = x*p + y*q + z*r
                  [ rx ry rz ]

“If we interpret the rows of a matrix as the basis vectors of a coordinate space, then multiplying by that matrix performs a coordinate space transformation. If a*M = b, we say that M transforms a into b.”

This shows how a matrix performs a coordinate space transformation on a vector.

So why are the matrices we use later 4×4 rather than 3×3?
A 4×4 matrix of this kind is called a homogeneous matrix. Besides notational convenience, the main reason homogeneous matrices exist is that a 3×3 transformation matrix can only represent linear transformations, while a 4×4 homogeneous matrix can also represent transformations that are not linear, such as translation.

So what is a linear transformation?
A linear transformation satisfies:
F(a + b) = F(a) + F(b)
F(ka) = k × F(a)

Since linear transformations cannot include translation, this is the reason 4×4 homogeneous matrices appear.
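To make the point concrete: translation by (tx, ty, tz) fails F(ka) = k × F(a) (translating a doubled vector is not double the translated vector), so no 3×3 matrix can express it; but with a fourth homogeneous coordinate fixed at 1, it becomes a single matrix multiply:

[ 1 0 0 tx ]   [ x ]   [ x + tx ]
[ 0 1 0 ty ] * [ y ] = [ y + ty ]
[ 0 0 1 tz ]   [ z ]   [ z + tz ]
[ 0 0 0 1  ]   [ 1 ]   [   1    ]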

Knowing why we use matrices and why they are 4×4 homogeneous matrices, let's see how matrices implement the coordinate space transformations of objects in 3D graphics.
M(m-w) – model space to world space
M(w-v) – world space to view space
M(v-p) – projection transform

V’ = V × M(m-w) × M(w-v) × M(v-p)

Since matrix multiplication is associative:
N = M(m-w) × M(w-v) × M(v-p)
V’ = V × (M(m-w) × M(w-v) × M(v-p)) = V × N
So we only need to compute the product of all the coordinate space transformation matrices once, and then apply it to V.

Since a single matrix stores a sequence of transformations, and these can be composed from several individual transformations, the following holds:
M = S(scale) × R(rotation) × T(translation)

There is a crucial point here: the multiplication order of S, R, and T. Matrix multiplication is not commutative, and we must use the order S * R * T. The reason:
One reason order is significant is that transformations like rotation and scaling are done with respect to the origin of the coordinate system. Scaling an object that is centered at the origin produces a different result than scaling an object that has been moved away from the origin. Similarly, rotating an object that is centered at the origin produces a different result than rotating an object that has been moved away from the origin.

So the order must be S * R * T because S and R are performed relative to the coordinate system's origin; if T runs first, the position relative to the origin changes, and performing S and R afterwards gives a different result.

Because OpenGL uses column vectors multiplied on the left, the order in OpenGL is:
V’ = T × R × S × V

In DX the order is:
V’ = V × S × R × T

The code that computes the final M(m-w) is as follows:

const Matrix4f& Pipeline::GetWorldTrans()
{
Matrix4f ScaleTrans, RotateTrans, TranslationTrans;

ScaleTrans.InitScaleTransform(m_scale.x, m_scale.y, m_scale.z);
RotateTrans.InitRotateTransform(m_rotateInfo.x, m_rotateInfo.y, m_rotateInfo.z);
TranslationTrans.InitTranslationTransform(m_worldPos.x, m_worldPos.y, m_worldPos.z);

m_Wtransformation = TranslationTrans * RotateTrans * ScaleTrans;
return m_Wtransformation;
}

V’ = V × M(m-w) × M(w-v) × M(v-p)
We now know how M(m-w) is computed; next we need to understand M(w-v) – world space to view space.
Before looking at how to transform from world space to view space, let's define the camera:
Position – (x, y, z)
N – The vector from the camera to its target (the look-at direction)
V – When standing upright, this is the vector from your head to the sky (the up vector, perpendicular to N)
U – This vector points from the camera to its “right” side (computable once N and V are fixed)

Camera space and world space:
CameraCoordinateTranslation

Transforming an object from world space to camera space is really just a coordinate space transformation problem.
First we move the camera to the world origin (simply translate by the camera position):
[ 1 0 0 -x ]
[ 0 1 0 -y ]
[ 0 0 1 -z ]
[ 0 0 0  1 ]

Then we only need to consider the change of basis:
CameraCoordinate2
From N, V, and U we can already derive the three basis vectors X(camera), Y(camera), Z(camera).
Remember what we said earlier – “If we interpret the rows of a matrix as the basis vectors of a coordinate space, then multiplying by that matrix performs a coordinate space transformation. If a*M = b, we say that M transforms a into b.”
So:

[ Ux Uy Uz 0 ]   [ X(world) ]   [ X(camera) ]
[ Vx Vy Vz 0 ]   [ Y(world) ]   [ Y(camera) ]
[ Nx Ny Nz 0 ] * [ Z(world) ] = [ Z(camera) ]
[ 0  0  0  1 ]   [    1     ]   [     1     ]

Combining this with moving the camera to the world origin first, we get:

         [ Ux Uy Uz 0 ]   [ 1 0 0 -x ]
M(w-v) = [ Vx Vy Vz 0 ] * [ 0 1 0 -y ]
         [ Nx Ny Nz 0 ]   [ 0 0 1 -z ]
         [ 0  0  0  1 ]   [ 0 0 0  1 ]

The implementation of M(w-v) is as follows:

void Matrix4f::InitTranslationTransform(float x, float y, float z)
{
m[0][0] = 1.0f; m[0][1] = 0.0f; m[0][2] = 0.0f; m[0][3] = x;
m[1][0] = 0.0f; m[1][1] = 1.0f; m[1][2] = 0.0f; m[1][3] = y;
m[2][0] = 0.0f; m[2][1] = 0.0f; m[2][2] = 1.0f; m[2][3] = z;
m[3][0] = 0.0f; m[3][1] = 0.0f; m[3][2] = 0.0f; m[3][3] = 1.0f;
}

void Matrix4f::InitCameraTransform(const Vector3f& Target, const Vector3f& Up)
{
Vector3f N = Target;
N.Normalize();
Vector3f U = Up;
U.Normalize();
U = U.Cross(N);
Vector3f V = N.Cross(U);

m[0][0] = U.x; m[0][1] = U.y; m[0][2] = U.z; m[0][3] = 0.0f;
m[1][0] = V.x; m[1][1] = V.y; m[1][2] = V.z; m[1][3] = 0.0f;
m[2][0] = N.x; m[2][1] = N.y; m[2][2] = N.z; m[2][3] = 0.0f;
m[3][0] = 0.0f; m[3][1] = 0.0f; m[3][2] = 0.0f; m[3][3] = 1.0f;
}

const Matrix4f& Pipeline::GetViewTrans()
{
Matrix4f CameraTranslationTrans, CameraRotateTrans;

CameraTranslationTrans.InitTranslationTransform(-m_camera.Pos.x, -m_camera.Pos.y, -m_camera.Pos.z);
CameraRotateTrans.InitCameraTransform(m_camera.Target, m_camera.Up);

m_Vtransformation = CameraRotateTrans * CameraTranslationTrans;

return m_Vtransformation;
}

With that, M(w-v) is implemented. Next let's see how M(v-p) is computed.
The p in M(v-p) can be one of several projection types; here I only take Perspective Projection as an example.
After transforming into camera space, we still need a perspective projection to map 3D objects onto the 2D plane.
Perspective Projection is determined by the following four parameters:

  1. The aspect ratio - the ratio between the width and the height of the rectangular area which will be the target of projection.
  2. The vertical field of view.
  3. The location of the near Z plane.
  4. The location of the far Z plane.

The derivation for this chapter can be found in the perspective projection article (透视投影详解).
One point I did not understand at first in the derivation was:

Z'' = a * (1/Pz) + b

It became clear after reading section 5.4.1, Depth Interpolation, of "Mathematics for 3D Game Programming and Computer Graphics, 3rd Edition": during rasterization, the depth value is proven to be obtained by interpolating the reciprocal of Z.
So the formula above holds.

After a series of derivations, we arrive at:
PerspectiveProject1
PerspectiveProject2
PerspectiveProject3
PerspectiveProject4

Note:
The derivation above is for DirectX. DirectX and OpenGL differ in one important way when deriving the perspective projection matrix: after the transform, DirectX maps the z coordinate to [0,1], while OpenGL maps it to [-1,1].

So if we substitute the z range [-1,1] into the formula below:

Z'' = a * (1/Pz) + b

we obtain the OpenGL perspective projection matrix as follows (below, θ = FOV/2):

    [ cotθ/Aspect    0          0              0       ]
    [     0        cotθ         0              0       ]
M = [     0          0    (-n-f)/(n-f)   2*f*n/(n-f)   ]
    [     0          0          1              0       ]

So the OpenGL implementation of M(v-p) looks like this:

void Matrix4f::InitPersProjTransform(const PersProjInfo& p)
{
    const float ar = p.Width / p.Height;
    const float zRange = p.zNear - p.zFar;
    const float tanHalfFOV = tanf(ToRadian(p.FOV / 2.0f));

    m[0][0] = 1.0f/(tanHalfFOV * ar); m[0][1] = 0.0f;            m[0][2] = 0.0f;                       m[0][3] = 0.0f;
    m[1][0] = 0.0f;                   m[1][1] = 1.0f/tanHalfFOV; m[1][2] = 0.0f;                       m[1][3] = 0.0f;
    m[2][0] = 0.0f;                   m[2][1] = 0.0f;            m[2][2] = (-p.zNear - p.zFar)/zRange; m[2][3] = 2.0f*p.zFar*p.zNear/zRange;
    m[3][0] = 0.0f;                   m[3][1] = 0.0f;            m[3][2] = 1.0f;                       m[3][3] = 0.0f;
}

Keyboard && Mouse Control

This chapter is mainly about responding to keyboard and mouse input through the APIs provided by GLUT.
Two classes are used in this chapter:

  1. Pipeline
  2. Camera

Pipeline is the abstraction of the previous chapter's matrix transformations that put an object onto the 2D plane:
M(m-w) – object (model) space to world space
M(w-v) – world space to view space
M(v-p) – projection transform

N = M(m-w) × M(w-v) × M(v-p)
V’ = V × M(m-w) × M(w-v) × M(v-p) = V × N

Once Pipeline knows the object's S, R, T information it can derive M(m-w); knowing the Camera information gives M(w-v); and knowing the perspective projection information gives M(v-p).

By updating the camera information we obtain the new N after the camera moves, and apply it to the objects so they appear in the correct positions.

The Camera class is the abstraction of the camera.

The source code of Pipeline and Camera can be downloaded from Modern OpenGL Tutorials.

Here I only care about the code that responds to keyboard and mouse input.
The main GLUT APIs for keyboard and mouse are:

  1. glutSpecialFunc() – for special keys such as F1
  2. glutKeyboardFunc() – for ordinary keys such as A, B, C…
  3. glutPassiveMotionFunc() – for mouse movement inside the window while no mouse button is pressed
  4. glutMotionFunc() – for mouse movement inside the window while a mouse button is pressed

Relevant code:

static void SpecialKeyboardCB(int Key, int x, int y)
{
    OGLDEV_KEY OgldevKey = GLUTKeyToOGLDEVKey(Key);
    pGameCamera->OnKeyboard(OgldevKey);
}

static void KeyboardCB(unsigned char Key, int x, int y)
{
    switch (Key) {
    case 'q':
        glutLeaveMainLoop();
    }
}

static void PassiveMouseCB(int x, int y)
{
    pGameCamera->OnMouse(x, y);
}

static void InitializeGlutCallbacks()
{
    ......
    glutSpecialFunc(SpecialKeyboardCB);
    glutPassiveMotionFunc(PassiveMouseCB);
    glutKeyboardFunc(KeyboardCB);
}

final result:
MouseAndKeyboardStudy

Texture Mapping

“Textures are composed of texels, which often contain color values.”

“Textures are bound to the OpenGL context via texture units, which are represented as binding points named GL_TEXTURE0 through GL_TEXTUREi where i is one less than the number of texture units supported by the implementation.”

The textures are accessed via sampler variables, which are declared in the shader with a dimensionality that matches the texture.

Before actually working with textures, let's understand the following important concepts:

  1. Texture object – contains the data of the texture image itself, i.e. the texels (the texture object is what holds the raw data)

  2. Texture unit – a texture object is bound to a 'texture unit' whose index is passed to the shader. So the shader reaches the texture object by going through the texture unit. (we do not access the texture data through the texture object directly; in the shader we access the texture object's data through the texture unit at a specific index)

  3. Sampler object – configure it with a sampling state and bind it to the texture unit. When you do that, the sampler object overrides any sampling state defined in the texture object. (a sampler object holds sampling configuration; when applied to a texture object it overrides the texture object's own sampler settings)

  4. Sampler uniform – corresponds to the handle of a texture unit (used in the shader to access a texture unit; the texture unit is bound to a texture object, so this indirectly accesses the texture's raw data)

Relationship between texture object, texture unit, sampler object and sampler uniform
RelationshipBetweenThem

Because OpenGL does not provide an API for loading a texture from an image file, we need a third-party library for this; the tutorial uses ImageMagick.
ImageMagick reads raw data from resource files of many formats and provides the memory address we pass as the source data to glTexImage2D().

Steps to use texture mapping:

  1. Create a texture object and load texel data into it
    glGenTextures() – generate a texture object
    glBindTexture() – tells OpenGL which texture object we refer to in all following texture-related calls, until a new texture object is bound
    glTexImage2D() – load texel data into the texture object

  2. Include texture coordinates with your vertices

Vertex Vertices[4] = { Vertex(Vector3f(-1.0f, -1.0f, 0.5773f), Vector2f(0.0f, 0.0f)),
                       Vertex(Vector3f(0.0f, -1.0f, -1.15475f), Vector2f(0.0f, 0.0f)),
                       Vertex(Vector3f(1.0f, -1.0f, 0.5773f), Vector2f(0.0f, 0.0f)),
                       Vertex(Vector3f(0.0f, 1.0f, 0.0f), Vector2f(0.5f, 1.0f)) };
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);

// We write each vertex's texture coordinates into the vertex data.
// Speaking of texture coordinates, we have to mention texture UV coordinates:
// the texture image is mapped onto 2D coordinates in the [0,1] range (see the figure below).

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)12);

// Then, by specifying how to parse the vertex data, the shader can access each
// vertex's texture coordinates and use them to sample the texture.
  3. Associate a texture sampler with each texture map you intend to use in your shader
    glTexParameterf() – configures the texture sampling mode
    Remember the sampler object we discussed earlier? This configuration is comparable to setting up a sampler object and then applying it to a specific texture object.
    I will not go into the sampling-mode options here (the sampling mode determines how the final pixel is computed), but one concept worth mentioning is the mipmap.
    Mipmaps exist because an object's on-screen size changes with its distance in the scene. If an object is far away and occupies only a small area, yet we still sample from a very large texture, the result on screen becomes noisy (aliased). To solve this, mipmaps pre-generate (or let you specify) the same texture at multiple levels of detail; at runtime the appropriate level is computed and used, avoiding the artifacts of sampling a large texture over a small area.
    We can specify each mipmap level manually via:
    glTexStorage2D() && glTexSubImage2D()
    or generate them automatically via:
    glGenerateMipmap()
    How the mipmap level is computed at runtime is not covered here; see the Calculating the Mipmap section of "OpenGL Programming Guide 8th Edition".
    Related functions:
    textureLod()
    textureGrad()

  4. Activate a texture unit and bind the texture object to it
    glActiveTexture() – activates a specific texture unit so a texture object can be bound to it
    glBindTexture() – binds a specific texture object to the active texture unit

  5. Retrieve the texel values through the texture sampler from your shader
    First we specify in the program which texture unit we are going to access:

gSampler = glGetUniformLocation(ShaderProgram, "gSampler");
assert(gSampler != 0xFFFFFFFF);

glUniform1i(gSampler, 0);

Note:
“The important thing to note here is that the actual index of the texture unit is used here, and not the OpenGL enum GL_TEXTURE0 (which has a different value).”

vsshader.vs

#version 330

uniform mat4 gWVP;

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;

out vec2 TexCoord0;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);

    TexCoord0 = TexCoord;
}

fsshader.fs

#version 330

in vec2 TexCoord0;

out vec4 FragColor;

uniform sampler2D gSampler;

void main()
{
    FragColor = texture2D(gSampler, TexCoord0.xy);
}

As seen above, in the fragment shader the gSampler uniform tells us which texture unit to use, and the incoming TexCoord0 provides the texture coordinates; texture2D then fetches the corresponding color from the texture as the output. This is how the texture image ends up applied to the triangle on screen.

TextureCoordinate

final result:
BasicTexture

Below is a simple test I did combining two textures into the final texel color:
TextureStudy.cpp

static void InitializeTextureInfo()
{
    gSampler = glGetUniformLocation(ShaderProgram, "gSampler");
    assert(gSampler != 0xFFFFFFFF);

    gSampler2 = glGetUniformLocation(ShaderProgram, "gSampler2");
    assert(gSampler2 != 0xFFFFFFFF);

    // Specify the index of the texture unit we will use in the shader
    glUniform1i(gSampler, 0);

    glUniform1i(gSampler2, 1);

    pTexture = new Texture(GL_TEXTURE_2D, "../Content/texture1.png");

    if (!pTexture->Load())
    {
        return;
    }

    pTexture2 = new Texture(GL_TEXTURE_2D, "../Content/texture2.jpg");

    if (!pTexture2->Load())
    {
        return;
    }
}

static void RenderCallbackCB()
{
    ......

    pTexture->Bind(GL_TEXTURE0);

    pTexture2->Bind(GL_TEXTURE1);

    ......
}

fsshader.fs

#version 330

in vec2 TexCoord0;

out vec4 FragColor;

uniform sampler2D gSampler;

uniform sampler2D gSampler2;

void main()
{
    FragColor = 0.5 * texture2D(gSampler, TexCoord0.xy) + 0.5 * texture2D(gSampler2, TexCoord0.xy);
}

The source images are:
Texture1
Texture2

final result:
MultipleTexture

Point Sprites:
To be studied later…

Rendering to Texture Maps:
To be studied later…

Summary:

  1. Use immutable texture storage for textures wherever possible – when a texture is marked as immutable, the OpenGL implementation can make certain assumptions about the validity of the texture object (use immutable texture storage whenever possible so OpenGL can rely on the texture's validity)

  2. Create and initialize the mipmap chain for textures unless you have a good reason not to – this improves the image quality of your program's rendering and also makes more efficient use of the caches in the graphics processor (create mipmaps for textures whenever possible for rendering efficiency and a lighter GPU load)

  3. Use an integer sampler in your shader when your texture data is an unnormalized integer and you intend to use the integer values it contains directly in the shader

Note:
"The maximum number of texture units supported by OpenGL can be determined by retrieving the value of the GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS constant, which is guaranteed to be at least 80 as of OpenGL 4.0."

Proxy texture – used to test the capabilities of the OpenGL implementation when certain limits are used in combination with each other.

Light and Shadow

Light source types:

  1. Ambient light – affects only the ambient term
  2. Directional light – affects diffuse & specular
  3. Point light – differs from directional light in that it attenuates and its direction onto the surface varies per point; it also affects diffuse & specular

Components of the classic lighting model:
Ambient – independent of the light direction

FragColor = texture2D(gSampler, TexCoord0.xy) *
            vec4(gDirectionalLight.Color, 1.0f) *
            gDirectionalLight.AmbientIntensity;

Because ambient light is independent of the light direction, only the light's color and its ambient intensity matter, so the computation essentially reduces to the expression above.

final effect:
Ambient

Note:
There is a bug in the source code here: when the subclass overrides the virtual function KeyboardCB, the parameter list is wrong, so the virtual function is not actually overridden and never gets called.
Wrong:
virtual void KeyboardCB(OGLDEV_KEY OgldevKey);
Correct:
virtual void KeyboardCB(OGLDEV_KEY OgldevKey, OGLDEV_KEY_STATE OgldevKeyState = OGLDEV_KEY_STATE_PRESS)

Diffuse – depends on the light direction and the vertex normal
Because diffuse lighting depends on the light direction and the vertex normals, we need to compute the vertex normals and pass them into the shader before doing the calculation there.

void CalcNormals(const unsigned int* pIndices, unsigned int IndexCount, Vertex* pVertices, unsigned int VertexCount)
{
    for (unsigned int i = 0 ; i < IndexCount ; i += 3) {
        unsigned int Index0 = pIndices[i];
        unsigned int Index1 = pIndices[i + 1];
        unsigned int Index2 = pIndices[i + 2];
        Vector3f v1 = pVertices[Index1].m_pos - pVertices[Index0].m_pos;
        Vector3f v2 = pVertices[Index2].m_pos - pVertices[Index0].m_pos;
        Vector3f Normal = v1.Cross(v2);
        Normal.Normalize();

        pVertices[Index0].m_normal += Normal;
        pVertices[Index1].m_normal += Normal;
        pVertices[Index2].m_normal += Normal;
    }

    for (unsigned int i = 0 ; i < VertexCount ; i++) {
        pVertices[i].m_normal.Normalize();
    }
}

vsshader.vs

#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;

uniform mat4 gWVP;
uniform mat4 gWorld;

out vec2 TexCoord0;
out vec3 Normal0;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    TexCoord0 = TexCoord;
    Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
}

Note the line "Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;" – the vertex normals were computed before the object was transformed, so the normals actually used in the calculation must first be transformed by the world matrix.

fsshader.fs

#version 330

in vec2 TexCoord0;
in vec3 Normal0;

out vec4 FragColor;

struct DirectionalLight
{
    vec3 Color;
    float AmbientIntensity;
    float DiffuseIntensity;
    vec3 Direction;
};

uniform DirectionalLight gDirectionalLight;
uniform sampler2D gSampler;

void main()
{
    vec4 AmbientColor = vec4(gDirectionalLight.Color, 1.0f) *
                        gDirectionalLight.AmbientIntensity;

    float DiffuseFactor = dot(normalize(Normal0), -gDirectionalLight.Direction);

    vec4 DiffuseColor;

    if (DiffuseFactor > 0) {
        DiffuseColor = vec4(gDirectionalLight.Color, 1.0f) *
                       gDirectionalLight.DiffuseIntensity *
                       DiffuseFactor;
    }
    else {
        DiffuseColor = vec4(0, 0, 0, 0);
    }

    FragColor = texture2D(gSampler, TexCoord0.xy) *
                (AmbientColor + DiffuseColor);
}

As "float DiffuseFactor = dot(normalize(Normal0), -gDirectionalLight.Direction);" shows, the angle between the light direction and the vertex normal directly determines the diffuse contribution.
See Lambert's cosine law.

final effect:
Diffuse

Note:
I wrote this following the official site and the source code but in my own way, and for some reason none of the comparisons on DiffuseFactor (>0, <0, ==0) ever succeeded; execution always fell into the final else.
I checked the uniform values with gDEBugger and they were fine, and all the related values on the C++ side were correct too. After spending a long time on this I could not solve it. I was going to debug the GLSL with Nsight, but my laptop does not seem to be supported.
My final conclusion is that the vertex normals were corrupted on their way to the shader, making the dot product and thus DiffuseFactor wrong. Breakpoints on the VS side showed correct normals, but I am not sure why the values the shader received were wrong. (I reached this conclusion mainly because the same shader code works when loading an existing model.)

Specular – depends on the light direction, the viewing direction, and the vertex normal
On top of diffuse, specular takes one more factor into account: the viewer's position. If the viewer happens to sit near the reflection direction, that point looks brighter to them than to viewers elsewhere. Not every real-world object behaves this way, so specular is more a property of the object's material than of the light itself.

Look at the figure below:
SpecularModle
I is the incoming light direction
N is the surface normal
R is the perfectly reflected light direction
V is the viewing direction
a is the angle between the viewing direction and the perfectly reflected light
As the figure shows, the closer the viewing direction is to R, the closer the viewer gets to the maximum intensity at that point.

We compute R from I, N and -N:
See the figure below for details:
SpecularModle2
R = I + V
V = 2 * N * dot(-N, I)
It is worth mentioning that GLSL provides the reflect function, which computes the reflection R directly from the light direction and the surface normal.

Let's look directly at the specular formula:
SpecularCalculation
M – depends on the object's material; the material determines the specular reflection coefficient
(R·V)^p – the dot product of the perfect reflection direction and the viewing direction, raised to the power p, where p is the shininess factor
Expressed as code:

vec3 VertexToEye = normalize(gEyeWorldPos - WorldPos0);
vec3 LightReflect = normalize(reflect(gDirectionalLight.Direction, Normal));
float SpecularFactor = dot(VertexToEye, LightReflect);
if (SpecularFactor > 0) {
    SpecularFactor = pow(SpecularFactor, gSpecularPower);
    SpecularColor = vec4(gDirectionalLight.Color * gMatSpecularIntensity * SpecularFactor, 1.0f);
}

FragColor = texture2D(gSampler, TexCoord0.xy) *
            (AmbientColor + DiffuseColor + SpecularColor);

Limitations of the Classic Lighting Model:
Big missing pieces:

  1. It assumes no other objects block the path of the light to the surface (light is never occluded)
  2. Accurate ambient lighting (the model uses a uniform, fixed ambient term; in reality ambient light attenuates)

Having covered the three lighting components and the limitations of the classic model, let's look at another light type, the point light:
A point light is a light source that attenuates with distance.
The formula is as follows:
PointLightFormulation

To implement point-light computation, we only need to compute the light direction from the point light's position, and at the end divide by the attenuation computed from the distance between the object and the light source.

vec4 CalcPointLight(int Index, vec3 Normal)
{
    vec3 LightDirection = WorldPos0 - gPointLights[Index].Position;
    float Distance = length(LightDirection);
    LightDirection = normalize(LightDirection);

    vec4 Color = CalcLightInternal(gPointLights[Index].Base, LightDirection, Normal);
    float Attenuation = gPointLights[Index].Atten.Constant +
                        gPointLights[Index].Atten.Linear * Distance +
                        gPointLights[Index].Atten.Exp * Distance * Distance;

    return Color / Attenuation;
}

Next let's look at the spot light:
The main difference between a spot light and a point light is that a spot light defines a cone of influence and a central direction.
The cone is defined by a cutoff value:
Cutoff – "The cutoff value represents the maximum angle between the light direction and the light to pixel vector for pixels that are under the influence of the spot light."
See the figure below:
SpotLight

Whether a point's color is affected is decided by testing whether it lies inside the spot light's cone.
One thing to pay attention to here is how the cutoff value is mapped to [0,1]. In general the cutoff cannot be set to 0 (i.e. 90 degrees), so to soften the effect toward the cone's edge we interpolate linearly relative to the cutoff value.
The derivation is as follows (from OpenGL Tutorial 21):
SpotLightCutoffMapping

Knowing how to decide whether a point's color is affected, and how to interpolate the influence, the final code comes down to:

vec4 CalcSpotLight(struct SpotLight l, vec3 Normal)
{
    vec3 LightToPixel = normalize(WorldPos0 - l.Base.Position);
    float SpotFactor = dot(LightToPixel, l.Direction);
    // Decide whether the point lies inside the spot light's cone.
    if (SpotFactor > l.Cutoff) {
        vec4 Color = CalcPointLight(l.Base, Normal);
        // Interpolate the influence toward the cone's edge.
        return Color * (1.0 - (1.0 - SpotFactor) * 1.0/(1.0 - l.Cutoff));
    }
    else {
        return vec4(0,0,0,0);
    }
}

More Advanced Lighting Model:
Hemisphere Lighting:
The idea behind hemisphere lighting is that we model the illumination as two hemispheres. The upper hemisphere represents the sky and the lower hemisphere represents the ground

Imaged-Based Lighting:
“It is often easier and much more efficient to sample the lighting in such environments and store the results in one or more environment maps”

Lighting with Spherical Harmonics:
“This method reproduces accurate diffuse reflection, based on the content of a light probe image, without accessing the light probe image at runtime”

See the reference for details.

Summary:
Ambient – independent of the light direction
Ambient light does not attenuate and has no direction, so only the light color and the surface color matter.
Diffuse – depends on the light direction and the vertex normal
The angle between the light direction and the vertex normal directly determines the diffuse contribution.
Specular – depends on the light direction, the viewing direction, and the vertex normal
The viewer's position, the light direction and the normal determine the specular contribution seen from that position; the object's material determines the specular coefficient.
The object's final color is computed by accumulating the ambient, diffuse and specular contributions of every light source in the scene.

Next let's look at an important technique in real rendering – Shadow Mapping.
Shadow Mapping – uses a depth texture to determine whether a point is lit or not.

Shadow mapping is a multipass technique that uses depth textures to provide a solution to rendering shadows (the core idea is to compare the depth recorded from the light's point of view (the depth texture) against the depth seen from the viewer's point of view, to decide which points are shadowed and which are not – note that the comparison happens after mapping into the 2D depth texture)
A key pass is to view the scene from the shadow-casting light source rather than from the final viewpoint
Two passes:

  • First Pass
    Shadow map – by rendering the scene's depth from the point of view of the light into a depth texture, we can obtain a map of the shadowed and unshadowed points in the scene
    In the first pass I followed the sample code, but the final image came out pure white.
    So I kept digging into the problem.
    ShaowMapFirstPassFailed
    First, I suspected the depth texture was not being created successfully:
// Create the FBO
glGenFramebuffers(1, &m_fbo);

// Create the depth buffer
glGenTextures(1, &m_shadowMap);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);

// Disable writes to the color buffer
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

if (Status != GL_FRAMEBUFFER_COMPLETE) {
    printf("FB error, status: 0x%x\n", Status);
    return false;
}

But the code above reported no errors, and inspecting the textures in gDEBugger showed that the depth texture had indeed been created.
ShaowMapFirstPassCreate
Looking closely at the image above, the model's depth information was rendered into the depth texture bound to FBO 1.

So next I suspected I was activating the wrong texture unit.
Below is the code that renders the depth texture onto a quad.

void ShadowMapFBO::BindForReading(GLenum TextureUnit)
{
    glActiveTexture(TextureUnit);
    glBindTexture(GL_TEXTURE_2D, m_shadowMap);
}

static void RenderPass()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUniform1i(gTextureLocation, 0);

    gShadowMapFBO.BindForReading(GL_TEXTURE0);

    Pipeline p;
    p.Scale(5.0f, 5.0f, 5.0f);
    p.WorldPos(0.0f, 0.0f, 10.0f);
    p.SetCamera(pGameCamera->GetPos(), pGameCamera->GetTarget(), pGameCamera->GetUp());
    p.SetPerspectiveProj(gPersProjInfo);

    glUniformMatrix4fv(gWVPLocation, 1, GL_TRUE, (const GLfloat*)p.GetWVPTrans());

    gPQuade->Render();
}

MyTextureList
DemoTextureList
Comparing the screenshots above, my code created three textures while the demo created only two, and the texture my code enabled was not the one bound to FBO 1 but the third texture. This made me suspect that, while loading the mesh, a third texture was loaded and bound to texture unit 0, and I happened to activate that very unit.

Because my mesh.cpp carried over the code from the previous tutorial, I had not updated it to the latest tutorial's mesh code.
Below are the texture-loading method of the mesh.cpp I used, and the Texture class's loading code:

bool Mesh::InitMaterials(const aiScene* pScene, const std::string& Filename)
{
    ......

    // Load a white texture in case the model does not include its own texture
    if (!m_Textures[i]) {
        m_Textures[i] = new Texture(GL_TEXTURE_2D, "../Content/white.png");

        Ret = m_Textures[i]->Load();
    }

    ......
}


void Mesh::Render()
{
    .....

    if (MaterialIndex < m_Textures.size() && m_Textures[MaterialIndex]) {
        m_Textures[MaterialIndex]->Bind(GL_TEXTURE0);
    }

    .......
}

bool Texture::Load()
{
    try {
        m_image.read(m_fileName);
        m_image.write(&m_blob, "RGBA");
    }
    catch (Magick::Error& Error) {
        std::cout << "Error loading texture '" << m_fileName << "': " << Error.what() << std::endl;
        return false;
    }

    glGenTextures(1, &m_textureObj);
    glBindTexture(m_textureTarget, m_textureObj);
    glTexImage2D(m_textureTarget, 0, GL_RGBA, m_image.columns(), m_image.rows(), 0, GL_RGBA, GL_UNSIGNED_BYTE, m_blob.data());
    glTexParameterf(m_textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(m_textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(m_textureTarget, 0);

    return true;
}

void Texture::Bind(GLenum TextureUnit)
{
    glActiveTexture(TextureUnit);
    glBindTexture(m_textureTarget, m_textureObj);
}

As shown above, if the loaded mesh has no texture of its own, white.png is loaded by default, and at render time texture unit 0 is activated with that texture bound to it.

That is why the code below ended up activating the wrong texture.

glUniform1i(gTextureLocation, 0);

gShadowMapFBO.BindForReading(GL_TEXTURE0);

So without changing the Texture and Mesh source code, I only need to bind the generated texture to GL_TEXTURE2 and tell the shader to access texture unit 2.
Changing the code above to the following is enough:

glUniform1i(gTextureLocation, 2);

gShadowMapFBO.BindForReading(GL_TEXTURE2);

ShadowMapFirstPassSuccessful

Rendering into the depth texture mainly involves the following steps:

  1. Create a new FBO and texture object
// Create the FBO
glGenFramebuffers(1, &m_fbo);

// Create the depth texture
glGenTextures(1, &m_shadowMap);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  2. Bind the new FBO and attach the texture object to its depth attachment
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);
  3. Disable color writes to the new FBO. We only need the depth information, so no color needs to be written to it.
glDrawBuffer(GL_NONE);
  4. Check that the new FBO's status is complete
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

if (Status != GL_FRAMEBUFFER_COMPLETE) {
    printf("FB error, status: 0x%x\n", Status);
    return false;
}
  5. Bind the new FBO, clear the depth buffer, then issue the draw call from the light's point of view so the depth information is written into the new FBO and its attached depth texture
static void ShadowMapPass()
{
    gShadowMapFBO.BindForWriting();

    glClear(GL_DEPTH_BUFFER_BIT);

    Pipeline p;
    p.Scale(0.1f, 0.1f, 0.1f);
    p.Rotate(0.0f, gScale, 0.0f);
    p.WorldPos(0.0f, 0.0f, 5.0f);
    p.SetCamera(gSpotLight.Position, gSpotLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
    p.SetPerspectiveProj(gPersProjInfo);

    // Set uniform variable value
    glUniformMatrix4fv(gWVPLocation, 1, GL_TRUE, (const GLfloat*)p.GetWVPTrans());

    gPTank->Render();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
  • Second Pass
    Rendering the scene from the point of view of the viewer. Project the surface coordinates into the light’s reference frame and compare their depths to the depth recorded into the light’s depth texture. Fragments that are further from the light than the recorded depth value were not visible to the light, and hence in shadow

The key points of the second pass are the following:

  1. Render normally, passing the light's MVP so each vertex's projected position as seen from the light can be computed.
void RenderPass()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    m_pLightingEffect->Enable();

    m_pLightingEffect->SetEyeWorldPos(m_pGameCamera->GetPos());

    m_shadowMapFBO.BindForReading(GL_TEXTURE1);

    Pipeline p;
    p.SetPerspectiveProj(m_persProjInfo);

    p.Scale(10.0f, 10.0f, 10.0f);
    p.WorldPos(0.0f, 0.0f, 1.0f);
    p.Rotate(90.0f, 0.0f, 0.0f);
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    m_pLightingEffect->SetWVP(p.GetWVPTrans());
    m_pLightingEffect->SetWorldMatrix(p.GetWorldTrans());
    p.SetCamera(m_spotLight.Position, m_spotLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
    m_pLightingEffect->SetLightWVP(p.GetWVPTrans());
    m_pGroundTex->Bind(GL_TEXTURE0);
    m_pQuad->Render();

    p.Scale(0.1f, 0.1f, 0.1f);
    p.Rotate(0.0f, m_scale, 0.0f);
    p.WorldPos(0.0f, 0.0f, 3.0f);
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    m_pLightingEffect->SetWVP(p.GetWVPTrans());
    m_pLightingEffect->SetWorldMatrix(p.GetWorldTrans());
    p.SetCamera(m_spotLight.Position, m_spotLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
    m_pLightingEffect->SetLightWVP(p.GetWVPTrans());
    m_pMesh->Render();
}

lighting.vs
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;

uniform mat4 gWVP;
uniform mat4 gLightWVP;
uniform mat4 gWorld;

out vec4 LightSpacePos;
out vec2 TexCoord0;
out vec3 Normal0;
out vec3 WorldPos0;

void main()
{
    ......
    // This transforms the vertex into the perspective projection with the light as the camera
    LightSpacePos = gLightWVP * vec4(Position, 1.0);
    ......
}
  2. Transform the light-space projected position into NDC space (normalized device coordinates; after the perspective divide, x, y and z are all mapped to [-1,1]), giving the vertex's NDC coordinates as seen from the light.

//lighting.fs
float CalcShadowFactor(vec4 LightSpacePos)
{
    // Dividing by w gives us NDC-space coordinates
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
    ......
}
  3. Finally, map the coordinates into [0,1] to look up the stored depth in the depth texture and compare it against the fragment's own z depth; if the depth texture's value is smaller, the point is occluded and should be in shadow.

float CalcShadowFactor(vec4 LightSpacePos)
{
    ......
    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjCoords.x + 0.5;
    UVCoords.y = 0.5 * ProjCoords.y + 0.5;
    float z = 0.5 * ProjCoords.z + 0.5;
    float Depth = texture(gShadowMap, UVCoords).x;
    if (Depth < z + 0.00001)
        return 0.5;
    else
        return 1.0;
}

Because x, y and z are in [-1,1] in NDC space, the mapping above is all we need to bring them into [0,1].
This gives us the texture coordinates and the depth z in [0,1]; we then look up the light-space depth stored in the depth texture and compare it against the current fragment's light-space depth to decide whether it is in shadow.

The implementation here is fairly involved, so I did not rewrite it myself; see the source code of Tutorial 24 for details.
The final effect:
ShadowMap

Skybox

A skybox is a method of creating backgrounds to make a computer and video games level look bigger than it really is. When a skybox is used, the level is enclosed in a cuboid. (From wiki)
SkyboxTexture

In OpenGL, a skybox is implemented with a cubemap.

In order to sample from the cubemap we will use a 3D texture coordinate instead of a 2D coordinate.

Skydome – a skybox which uses a sphere is sometimes called a skydome.

The main points to watch when implementing a skybox are:

  1. Generate the cubemap texture, supplying texture data for each of the six face targets of the skybox
static const GLenum types[6] = { GL_TEXTURE_CUBE_MAP_POSITIVE_X,
                                 GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
                                 GL_TEXTURE_CUBE_MAP_POSITIVE_Y,
                                 GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
                                 GL_TEXTURE_CUBE_MAP_POSITIVE_Z,
                                 GL_TEXTURE_CUBE_MAP_NEGATIVE_Z };

bool CubemapTexture::Load()
{
    // Generate the cubemap texture
    glGenTextures(1, &m_textureObj);
    glBindTexture(GL_TEXTURE_CUBE_MAP, m_textureObj);

    ......

    // Supply the texture data for each of the skybox's six face targets
    glTexImage2D(types[i], 0, GL_RGB, pImage->columns(), pImage->rows(), 0, GL_RGBA, GL_UNSIGNED_BYTE, blob.data());

    ......
}
  2. When rendering the skybox, set glCullFace to GL_FRONT and glDepthFunc to GL_LEQUAL (the camera sits inside the skybox sphere, whose triangles are front faces, so for the skybox sphere we need to cull the front rather than the back; and so the skybox is never clipped, we change the default depth function to GL_LEQUAL so that it survives even on the far plane at Z = 1)
void SkyBox::Render()
{
    m_pSkyboxTechnique->Enable();

    GLint OldCullFaceMode;
    glGetIntegerv(GL_CULL_FACE_MODE, &OldCullFaceMode);
    GLint OldDepthFuncMode;
    glGetIntegerv(GL_DEPTH_FUNC, &OldDepthFuncMode);

    // Make sure the skybox sphere is not clipped and shows the correct faces
    glCullFace(GL_FRONT);
    glDepthFunc(GL_LEQUAL);

    .......

    m_pMesh->Render();

    glCullFace(OldCullFaceMode);
    glDepthFunc(OldDepthFuncMode);
}
  3. Make sure the skybox's depth value always lands on the far plane at Z = 1 (this way the skybox loses the depth test against any scene geometry, but because we set glDepthFunc to GL_LEQUAL it still passes at Z = 1 where nothing else was drawn and is never clipped)
skybox.vs
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

out vec3 TexCoord0;

void main()
{
    vec4 WVP_Pos = gWVP * vec4(Position, 1.0);
    // By setting gl_Position's z to w, the skybox's z is always mapped to 1
    // (the far plane) after the perspective divide, before rasterization enters
    // the fragment shader. The skybox therefore loses the depth test against any
    // scene geometry but is never clipped (since glDepthFunc is GL_LEQUAL).
    gl_Position = WVP_Pos.xyww;
    TexCoord0 = Position;
}
  4. Use the object-space 3D position as the 3D texture coordinate to sample the cubemap
skybox.vs
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

out vec3 TexCoord0;

void main()
{
    ......
    // A cubemap assumes the cube or sphere is centered at its own origin
    // (personally I believe this reference point is adjustable in modeling tools).
    // The object-space position, passed through rasterization into the fragment
    // shader, can therefore be used directly as the texture coordinate.
    TexCoord0 = Position;
}

skybox.fs
#version 330

in vec3 TexCoord0;

out vec4 FragColor;

uniform samplerCube gCubemapTexture;

void main()
{
    // TexCoord0 has been interpolated into [-1,1] by rasterization
    FragColor = texture(gCubemapTexture, TexCoord0);
}

Skybox

Note:
“An interesting performance tip is to always render the skybox last (after all the other models). The reason is that we know that it will always be behind the other objects in the scene.”

Normal Mapping

在了解Normal Mapping之前不得不提Bump Mapping
下列关于Bump Mapping大部分内容来源:
OpenGL 法线贴图 切线空间 整理
Bump mapping
关于法线贴图, 法线, 副法线, 切线 的东东,看了很容易理解

What is Bump Mapping?
Bump mapping[1] is a technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations.

可以看出Bump Mapping是通过改变物体顶点法线来影响光照的效果,最终看起来有凹凸的效果(而非顶点之间真实的深度差),是一种欺骗眼睛的技术。

“Jim Blinn在1978发表了一篇名为:“Simulation of Wrinkled Surfaces”,提出了Bump Mapping这个东东。Bump Mapping通过一张Height Map记录各象素点的高度信息,有了高度信息,就可以计算HeightMap中当前象素与周围象素的高度差,这个高度差就代表了各象素的坡度,用这个坡度信息去绕动法向量,得到最终法向量,用于光照计算。坡度越陡,绕动就越大。”

Why Bump Mapping?
“如果要在几何体表面表现出凹凸不平的细节,那么在建模的时候就会需要很多的三角面,如果用这样的模型去实时渲染,出来的效果是非常好,只是性能上很有可能无法忍受。Bump Mapping不需要增加额外的几何信息,就可以达到增强被渲染物体的表面细节的效果,可以大大地提高渲染速度,因此得到了广泛的应用。”

What is Normal Mapping?
“Normal Mapping也叫做Dot3 Bump Mapping,它也是Bump Mapping的一种,区别在于Normal Mapping技术直接把Normal存到一张NormalMap里面,从NormalMap里面采回来的值就是Normal,不需要像HeightMap那样再经过额外的计算。”

“值得注意的是,NormalMap存的Normal是基于切线空间的,因此要进行光照计算时,需要把Normal,Light Direction,View direction统一到同一坐标空间中。”

这里不得不提的一个点就是切线空间(tangent space)
What is tangent space?
“ Tangent Space与World Space,View Space其实是同样的概念,均代表三维坐标系。在这个坐标系中,X轴对应纹理坐标的U方向,沿着该轴纹理坐标U线性增大。Y轴对应纹理坐标的V方向,沿着该轴纹理坐标V线性增大。Z轴则是UXV,垂直于纹理平面。”

Why do we need tangent space?
“为什么normal map里面存的法线信息是基于tangent sapce而不是基于local space呢?基于local space理论上也是可以的,但是这样的normal map只能用于一个模型,不同把这个normal map用于其他模型。比如说建模了一个人,并且生成了该模型基于local space的normal map, 如果我们建模同一个人,但是放的位置和角度和之前的不一样,那么之前的normal map就不能用了,因为local Space并不一样,但如果我们normal map里存的是tangent space的normal的话,就不存在这个问题,因为只要模型一样,模型上每个点的tangent space就是一样的,所谓以不变应万变。”

可以看出tangent space是针对顶点而言的。

How to get tangent space?
让我们看一下下图:
TangentSpaceCaculation
以下推导来源于:
Tutorial 26:Normal Mapping
TangentSpaceDeduce
TangentSpaceDeduce2
从上面而已看出通过三角形的顶点和纹理信息可以算出T和B

Note:
在实际开发中我们并非一定要手动写代码运算,比如”Open Asset Import Library就支持flag called ‘aiProcess_CalcTangentSpace’ which does exactly that and calculates the tangent vectors for us”

Normal Map也通过工具可以生成,比如3D Max, Maya, 教程里用的GNU Image Manipulation Program (GIMP)…….

当我们通过Normap Map获取得到normal值时,因为该normal值时位于tangent space下,所以我们必须对其进行坐标系转换,必须转换到world space后才参与光照计算。

而这个变换到世界坐标系的矩阵,可以通过tangent这个向量和顶点法线信息推导出来。

  1. Generate tangent data when loading the mesh, and specify the tangent attribute layout at render time
bool Mesh::LoadMesh(const std::string& Filename)
{
......

Assimp::Importer Importer;
//aiProcess_CalcTangentSpace asks Assimp to generate tangent data
const aiScene* pScene = Importer.ReadFile(Filename.c_str(), aiProcess_Triangulate |
aiProcess_GenSmoothNormals |
aiProcess_FlipUVs |
aiProcess_CalcTangentSpace);
......
}
void Mesh::Render()
{
......
//specify how the tangent data is read
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*)32); // tangent

......
}
  2. Transform the tangent and the vertex normal into world space
layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec3 Tangent;

uniform mat4 gWVP;
uniform mat4 gLightWVP;
uniform mat4 gWorld;

out vec4 LightSpacePos;
out vec2 TexCoord0;
out vec3 Normal0;
out vec3 WorldPos0;
out vec3 Tangent0;

void main()
{
......
//transform the tangent and the vertex normal into world space
Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
Tangent0 = (gWorld * vec4(Tangent, 0.0)).xyz;
......
}
  3. Compute the bitangent (B in world space) from the world-space tangent and normal (below, T stands for tangent, N for normal, B for bitangent)

  4. Decode the tangent-space normal stored in the normal map

  5. Use the world-space TBN basis to transform the decoded tangent-space normal from the normal map, yielding the vertex normal in world space

  6. With the world-space vertex normal in hand, run the usual diffuse lighting computation

(lighting.fs:132)
vec3 CalcBumpedNormal()
{
vec3 Normal = normalize(Normal0);
vec3 Tangent = normalize(Tangent0);
//re-orthogonalize: keep the component of T that lies in the TN plane and is perpendicular to N
Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
//the cross product yields B, perpendicular to both T and N
vec3 Bitangent = cross(Tangent, Normal);
vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
//one thing to note here:
//"When we describe a color, each of the RGB channels ranges from zero upward. But when we try to store an arbitrary normal in a texture we face the problem of negative values, so the normal has to be compressed. The method is simple: map each axis of the normal through (N + 1) / 2, which squeezes all normals into the [0, 1] range."
//so here we invert that mapping to recover the original normal
BumpMapNormal = 2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0);
vec3 NewNormal;
//with world-space T, N, and B all computed, we can build a TBN matrix that transforms the normal-map value into world space
mat3 TBN = mat3(Tangent, Bitangent, Normal);
NewNormal = TBN * BumpMapNormal;
//finally, normalizing gives the world-space vertex normal we need, which then takes part in the usual diffuse lighting computation
NewNormal = normalize(NewNormal);
return NewNormal;
}
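The same math is easy to sanity-check on the CPU. A sketch of what CalcBumpedNormal does (the small `V3` vector helpers are illustrative): decode the [0, 1] color back to [-1, 1], re-orthogonalize T against N, build B = T × N, and rotate the sampled normal by the TBN basis:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

static V3 Norm(V3 v) { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); return {v.x/l, v.y/l, v.z/l}; }
static V3 Cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float Dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// n, t: interpolated world-space normal and tangent; c: the RGB sample from the normal map.
V3 CalcBumpedNormal(V3 n, V3 t, V3 c)
{
    n = Norm(n);
    // Gram-Schmidt: keep the part of t perpendicular to n
    t = Norm({t.x - Dot(t, n)*n.x, t.y - Dot(t, n)*n.y, t.z - Dot(t, n)*n.z});
    V3 b = Cross(t, n);                                // bitangent, as in the shader
    V3 m = {2*c.x - 1, 2*c.y - 1, 2*c.z - 1};          // decode [0, 1] -> [-1, 1]
    // TBN * m: the columns of the basis are t, b, n
    return Norm({t.x*m.x + b.x*m.y + n.x*m.z,
                 t.y*m.x + b.y*m.y + n.y*m.z,
                 t.z*m.x + b.z*m.y + n.z*m.z});
}
```

The "flat" normal-map color (0.5, 0.5, 1.0) decodes to (0, 0, 1) and maps straight back onto the interpolated surface normal, which makes a quick sanity check for any TBN implementation.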

Let's look at a normal map texture:
NormalMapTexture

Normal Mapping (left) vs. regular mapping (right):
NormalMappingAndRegularMapping

Note:
A common use of this(normal mapping) technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map.

Applying a normal map baked from a high-poly model to a low-poly model adds rendering detail without adding rendering cost.

More:
The following material comes from:
Parallax Mapping
Normal Mapping does not take the view direction into account. In the real world, an uneven surface looks different from different view directions. Parallax Mapping was proposed to address exactly this.

Parallax Mapping was first proposed in a paper named "Detailed Shape Representation with Parallax Mapping". Its basic idea is shown in the figure below (taken from "Parallax Mapping with Offset Limiting: A Per-Pixel Approximation of Uneven Surfaces"). For the view direction shown, if the surface really were uneven, as the real surface in the figure is, the point we would see is B, so the correct texture coordinate for sampling the normal is TB rather than TA.
ParallaxMappinge
We therefore need to offset the texture coordinates. To meet real-time requirements an approximate offset is used (see the figure below), and this approximation already gives fairly good results. For the details of the offset computation see "Parallax Mapping with Offset Limiting: A Per-Pixel Approximation of Uneven Surfaces", which explains it thoroughly.
ParallaxMappinge2
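The basic parallax offset fits in a few lines. A hedged C++ sketch, assuming a tangent-space view direction with z pointing away from the surface, a sampled height `h`, and a user scale (the sign convention is an assumption; implementations differ depending on how the view vector is defined). The offset-limiting variant simply drops the division by viewZ:

```cpp
#include <cassert>
#include <cmath>

struct UV { float u, v; };

// Basic parallax mapping: shift the texture coordinate toward the viewer
// proportionally to the sampled height (assumed sign convention).
UV ParallaxOffset(UV uv, float viewX, float viewY, float viewZ, float h, float scale)
{
    float k = h * scale;
    return { uv.u - viewX / viewZ * k,
             uv.v - viewY / viewZ * k };
}

// Offset limiting: the same idea without the division, which avoids the
// extreme offsets that appear at grazing angles (viewZ near 0).
UV ParallaxOffsetLimited(UV uv, float viewX, float viewY, float h, float scale)
{
    float k = h * scale;
    return { uv.u - viewX * k, uv.v - viewY * k };
}
```

In a shader this shifted coordinate is what you would then use to sample the normal map and diffuse texture instead of the interpolated coordinate.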

Parallax Occlusion Mapping
Parallax Occlusion Mapping is an improvement over Parallax Mapping; the DirectX SDK has a sample dedicated to it, which covers the details. Parallax Occlusion Mapping implements self-shadowing and computes a more precise offset. It is more expensive than Parallax Mapping but gives better results.

Billboard and Geometry Shader

Geometry shader(Optional)
“The geometry shader sits logically right before primitive assembly and fragment shading.”

Receives as input a complete primitive as a collection of vertices, and these inputs are represented as arrays (the geometry shader receives all vertices of a whole primitive, accessed through the gl_in[] array)

Declaration of gl_in:

in gl_PerVertex {
vec4 gl_Position;
float gl_PointSize;
float gl_ClipDistance[];
}gl_in[];

Geometry Shader Features:

  1. Producing Primitives
    They can have a different primitive type for their output than they do for their input. (E.g.: wireframe rendering, billboards and even interesting instancing effects) (a billboard example follows later)

  2. Culling Geometry
    Selective culling (the geometry shader implements selective culling by emitting primitives only for particular gl_PrimitiveIDIn values)
    "gl_PrimitiveIDIn is a geometry language input variable that holds the number of primitives processed by the shader since the current set of rendering primitives was started."

  3. Geometry Amplification
    Produces more primitives on its output than it accepts on its input
    (can be used to implement fur shells or moderate tessellation: because the shader can process an incoming primitive and emit several, effects such as fur can be built by duplicating a primitive and varying its data)
    gl_MaxGeometryOutputVertices & glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES)
    Fur effect (from the OpenGL Red Book, 8th edition):
    Geometry_Shader_Fur

  4. Geometry Shader Instancing
    Only runs the geometry shader and subsequent stages (rasterization) multiple times, rather than the whole pipeline (a geometry-shader-instanced draw call re-runs the geometry, rasterization, and fragment stages several times)
    Geometry shader instancing is enabled in the shader by specifying the invocations layout qualifier

//gl_InvocationID identifies the invocation number assigned to the geometry shader invocation.
layout (triangles, invocations = 4) in; //invocations = 4 indicates that the geometry shader will be called 4 times for each input primitive
  5. Multiple Viewport Rendering
    gl_ViewportIndex (an output variable available in the geometry shader that can redirect rendering into different regions of the framebuffer)
    gl_ViewportIndex is used to specify which set of viewport parameters will be used to perform the viewport transformation by OpenGL
    (multiple viewports: gl_ViewportIndex addresses one of several viewports, and the geometry shader directs each primitive to a particular viewport)
    glViewportIndexedf() or glViewportIndexedfv() – specify how window x and y coordinates are generated from clip coordinates
    glDepthRangeIndexed() – specify how the window z coordinate is generated
    Result (example from the OpenGL Red Book, 8th edition):
    Multiple_Viewports

  6. Layered Rendering
    It is also possible to use a 2D array texture as a color attachment and render into the slices of the array using a geometry shader (attach a 2D array texture as the color attachment and let the geometry shader direct rendering into its slices)
    A restriction exists when using layered attachments with a framebuffer:
    All the attachments of that framebuffer must be layered
    Also, all attachments of a layered framebuffer must be of the same type
    gl_Layer – a built-in variable in the geometry shader – is used to specify the zero-based index of the layer into which rendering will be directed
    A typical use case:
    Cube map
    Attach a cube-map texture as the color attachment of the framebuffer
    The cube-map texture is treated here as an array texture with six layers
    An instanced geometry shader generates the six faces (one per layer); gl_InvocationID and gl_Layer select the face, and applying the corresponding projection matrix produces each cube-map face

  7. Advanced Transform Feedback
    First, what is transform feedback?
    Transform feedback can be considered a stage of the OpenGL pipeline that sits after all of the vertex-processing stages and directly before primitive assembly and rasterization. Transform feedback captures vertices as they are assembled into primitives and allows some or all of their attributes to be recorded into buffer objects. (Transform feedback happens after all vertex-processing stages (after the geometry shader if one is active, otherwise after the vertex shader) and before primitive assembly and rasterization; it can record some vertex attributes for use in the next pass.)

Why do we need transform feedback?
“DirectX10 introduced a new feature known as Stream Output that is very useful for implementing particle systems. OpenGL followed in version 3.0 with the same feature and named it Transform Feedback. The idea behind this feature is that we can connect a special type of buffer (called a Transform Feedback Buffer) right after the GS (or the VS if the GS is absent) and send our transformed primitives to it. In addition, we can decide whether the primitives will also continue on their regular route to the rasterizer. The same buffer can be connected as a vertex buffer in the next draw and provide the vertices that were output in the previous draw as input into the next draw. This loop enables the two steps above to take place entirely on the GPU with no application involvement (other than connecting the proper buffers for each draw and setting up some state).”

From the above, transform feedback lets us record vertex data before primitive assembly and feed it into the next draw, without it passing through clipping, rasterization, or the FS. Most importantly, all of this happens on the GPU; no data needs to be copied back to the CPU for processing.

The overall flow looks like this:
TransformFeedbackFlowchart

Some related concepts:
Transform Feedback Objects:
"The state required to represent transform feedback is encapsulated into a transform feedback object." (A transform feedback object stores the state related to transform feedback, for example which buffer is bound to which transform feedback buffer binding point.)

Transform Feedback Buffer:
The buffer that records the attributes captured from the vertex or geometry shader; here TFB refers to a buffer that has been bound to a binding point of a transform feedback object via a call such as glBindBufferBase.

glBindBufferBase takes an index that selects the binding point. If we want the transform feedback data spread across several buffers, we bind each buffer to a different binding point and let the arguments passed to glTransformFeedbackVaryings decide how the generated data is written into each buffer.

For the details of how glTransformFeedbackVaryings controls the way data is written to each buffer, see "Configuring Transform Feedback Varyings" in the OpenGL Red Book.

Since the particle effect uses billboards for display, before looking at the particle system let's see how a billboard is implemented with the GS:
Billboard – "A billboard is a quad which always faces the camera."

  1. Before a geometry shader may be linked, the input primitive type, output primitive type, and the maximum number of vertices that it might produce must be specified (before linking the geometry shader we must declare its input and output types)
#version 330
//the incoming data is points
layout(points) in;
//the GS will output triangle strips
layout(triangle_strip) out;
//the GS emits at most 4 vertices, since 4 vertices make the one quad we need
layout(max_vertices = 4) out;

......

void main()
{
......
}
//e.g.:
//layout (input primitive type) in;
//layout (output primitive type, max_vertices = number) out;
//(max_vertices is capped by a hardware limit; exceeding it makes the program link fail. Calling glGetProgramiv() with the GL_INFO_LOG_LENGTH parameter after linking retrieves the link error log; the same applies to shader compile logs)
  2. Use the incoming point primitive to generate new camera-facing primitive data
#version 330                                                                        

layout(points) in;
layout(triangle_strip) out;
layout(max_vertices = 4) out;

uniform mat4 gVP;
uniform vec3 gCameraPos;

out vec2 TexCoord;

void main()
{
//gl_in gives the GS access to the vertices of the incoming primitive; since we declared layout(points) in, only gl_in[0] needs to be read
//we compute the local frame of a camera-facing quad at this point and offset the vertex positions along it, so the generated primitive always faces the camera
vec3 Pos = gl_in[0].gl_Position.xyz;
vec3 toCamera = normalize(gCameraPos - Pos);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 right = cross(toCamera, up);

Pos -= (right * 0.5);
//gVP rather than gMVP is used here because the vertex positions were created directly in world space
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(0.0, 0.0);
EmitVertex();

Pos.y += 1.0;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(0.0, 1.0);
//EmitVertex appends a new vertex, built from the values above, to the primitive under construction
EmitVertex();

Pos.y -= 1.0;
Pos += right;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(1.0, 0.0);
EmitVertex();

Pos.y += 1.0;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(1.0, 1.0);
EmitVertex();
//EndPrimitive turns the vertices emitted above into one new primitive
EndPrimitive();
}

void BillboardList::CreatePositionBuffer()
{
Vector3f Positions[NUM_ROWS * NUM_COLUMNS];
//the vertex data is created directly in world coordinates
for (unsigned int j = 0 ; j < NUM_ROWS ; j++) {
for (unsigned int i = 0 ; i < NUM_COLUMNS ; i++) {
Vector3f Pos((float)i, 0.0f, (float)j);
Positions[j * NUM_COLUMNS + i] = Pos;
}
}

......
}
EmitVertex() - produces a new vertex at the output of the geometry shader. Each time it is called, a vertex is appended to the end of the current strip (appends a new vertex to the current primitive)
EndPrimitive() - breaks the current strip and signals OpenGL that a new strip should be started the next time EmitVertex() is called (closes the primitive built from the vertices emitted so far and tells OpenGL to start constructing the next one)
Note:
When the geometry shader exits, the current primitive is ended implicitly (if the geometry shader returns, any primitive for which EndPrimitive() has not been called is ended implicitly)
When EndPrimitive() is called, any incomplete primitives will simply be discarded (incomplete primitives are discarded; not emitting enough vertices for a primitive effectively culls it)
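The corner positions the GS emits can be reproduced on the CPU. A sketch (the `BillboardCorners` helper is illustrative) mirroring the shader: right = cross(toCamera, up), then the four strip vertices bottom-left, top-left, bottom-right, top-right:

```cpp
#include <cassert>
#include <cmath>

struct P3 { float x, y, z; };

static P3 Normalize(P3 v) { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); return {v.x/l, v.y/l, v.z/l}; }
static P3 CrossP(P3 a, P3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }

// Reproduces the quad the GS emits for one point: a unit-height quad centered
// horizontally on pos, facing cam. out[0..3] follow the triangle-strip order
// used in the shader.
void BillboardCorners(P3 pos, P3 cam, P3 out[4])
{
    P3 toCamera = Normalize({cam.x - pos.x, cam.y - pos.y, cam.z - pos.z});
    P3 up = {0.0f, 1.0f, 0.0f};
    P3 right = CrossP(toCamera, up);
    P3 p = {pos.x - right.x * 0.5f, pos.y - right.y * 0.5f, pos.z - right.z * 0.5f};
    out[0] = p;                                   // bottom-left
    out[1] = {p.x, p.y + 1.0f, p.z};              // top-left
    p = {p.x + right.x, p.y + right.y, p.z + right.z};
    out[2] = p;                                   // bottom-right
    out[3] = {p.x, p.y + 1.0f, p.z};              // top-right
}
```

Like the shader, this degenerates when toCamera is parallel to up (right collapses to zero), which is acceptable for ground-level billboards but worth guarding in general code.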
  3. In the FS, sample the texture with the GS-generated coordinates and cull the black parts of the image
#version 330                                                                        

uniform sampler2D gColorMap;

in vec2 TexCoord;
out vec4 FragColor;

void main()
{
FragColor = texture2D(gColorMap, TexCoord);
//cull the black parts of the texture
if (FragColor.r == 0 && FragColor.g == 0 && FragColor.b == 0) {
discard;
}
}

Final Effect:
GSBillboard

Next, the steps for implementing a particle system with transform feedback:

  1. Create the transform feedback objects and the buffers that store the data, and bind each buffer to its transform feedback object
bool ParticleSystem::InitParticleSystem(const Vector3f& pos)
{
Particle Particles[MAX_PARTICLES];
ZERO_MEM(Particles);
//the initial launcher particle of the particle system
Particles[0].Type = PARTICLE_TYPE_LAUNCHER;
Particles[0].Pos = pos;
Particles[0].Vel = Vector3f(0.0f, 0.0001f, 0.0f);
Particles[0].LifetimeMillis = 0.0f;
//create 2 transform feedback objects and two buffers
//"OpenGL enforces a general limitation that the same resource cannot be bound for both input and output in the same draw call. This means that if we want to update the particles in a vertex buffer we actually need two transform feedback buffers and toggle between them. On frame 0 we will update the particles in buffer A and render the particles from buffer B and on frame 1 we will update the particles in buffer B and render the particles from buffer A."
//so two TFOs and two buffers are needed because OpenGL forbids using the same resource (here the TFB and its buffer) as both input and output in one draw call;
//to record data in a first pass and then render it, we toggle between the two TFOs and buffers:
//while recording into A we render from B, and while recording into B we render from A
glGenTransformFeedbacks(2, m_TransformFeedback);

glGenBuffers(2, m_ParticleBuffer);

for(unsigned int i = 0; i < 2; i++)
{
//bind the TFO so that subsequent TFB operations are tied to this particular TFO (transform feedback object)
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, m_TransformFeedback[i]);
glBindBuffer(GL_ARRAY_BUFFER, m_ParticleBuffer[i]);
glBufferData(GL_ARRAY_BUFFER, sizeof(Particles), Particles, GL_DYNAMIC_DRAW);
//bind the buffer to its corresponding TFO
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, m_ParticleBuffer[i]);
}

.......
}
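The glVertexAttribPointer byte offsets used later in the update pass (0, 4, 16, 28, with stride sizeof(Particle)) imply a tightly packed layout. A sketch of the assumed Particle struct (the tutorial's Vector3f is stood in for by three floats), with the offsets verified at compile time:

```cpp
#include <cassert>
#include <cstddef>

// Assumed layout matching the attribute offsets in the tutorial:
// type at byte 0, position at 4, velocity at 16, lifetime at 28; 32 bytes total.
struct Particle {
    float Type;
    float Pos[3];
    float Vel[3];
    float LifetimeMillis;
};

static_assert(offsetof(Particle, Type) == 0,            "type at byte 0");
static_assert(offsetof(Particle, Pos)  == 4,            "position at byte 4");
static_assert(offsetof(Particle, Vel)  == 16,           "velocity at byte 16");
static_assert(offsetof(Particle, LifetimeMillis) == 28, "lifetime at byte 28");
static_assert(sizeof(Particle) == 32,                   "stride of 32 bytes");
```

If Vector3f carried padding or virtual members these offsets would shift, which is why attribute offsets are safer written with offsetof than with literal byte counts.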
  2. Configure the transform feedback varyings (specify which outputs the GS records into the TFB and how they are stored)
bool PSUpdateTechnique::Init()
{
......

const GLchar* Varyings[4];
Varyings[0] = "Type1";
Varyings[1] = "Position1";
Varyings[2] = "Velocity1";
Varyings[3] = "Age1";
//before linking the update shader we must declare how the TFB records and stores data
//here we declare that the GS will record the four variables listed in Varyings into the TFB
//GL_INTERLEAVED_ATTRIBS means all attributes are recorded into a single buffer
glTransformFeedbackVaryings(m_shaderProg, 4, Varyings, GL_INTERLEAVED_ATTRIBS);

if (!Finalize()) {
return false;
}

......

return true;
}
  3. Configure some parameters of the update shader and the billboard shader
bool ParticleSystem::InitParticleSystem(const Vector3f& pos)
{
......

if(!m_UpdateTechnique.Init())
{
assert(false);
return false;
}

m_UpdateTechnique.Enable();

m_UpdateTechnique.SetRandomTextureUnit(RANDOM_TEXTURE_UNIT_INDEX);
m_UpdateTechnique.SetLauncherLifetime(100.0f);
m_UpdateTechnique.SetShellLifetime(10000.0f);
m_UpdateTechnique.SetSecondaryShellLifetime(2500.0f);

if(!m_RandomTexture.InitRandomTexture(1000))
{
assert(false);
return false;
}

m_RandomTexture.Bind(RANDOM_TEXTURE_UNIT);

if(!m_BillboardTechnique.Init())
{
assert(false);
return false;
}

m_BillboardTechnique.Enable();

m_BillboardTechnique.SetColorTextureUnit(COLOR_TEXTURE_UNIT_INDEX);

m_BillboardTechnique.SetBillboardSize(0.01f);

m_PTexture = new Texture(GL_TEXTURE_2D, "../Content/fireworks_red.jpg");

if(!m_PTexture->Load())
{
assert(false);
return false;
}

return GLCheckError();
}
  4. One render call, two passes: one pass updates the data in the TFB, the other renders the data from the TFB
static void RenderPass()
{
......
gParticleSystem.Render(deltatimemillis, p.GetVPTrans(), pGameCamera->GetPos());
......
}

void ParticleSystem::Render(int deltatimemillis, const Matrix4f& vp, const Vector3f& camerapos)
{
m_Time += deltatimemillis;
//the shader simulates gravity and particle motion, so the update needs the delta time
UpdateParticles(deltatimemillis);

RenderParticles(vp, camerapos);
//this is the toggling between the two TFOs and buffers mentioned earlier: update one TFB while rendering from the other
m_CurrVB = m_CurrTFB;
m_CurrTFB = (m_CurrTFB + 1) & 0x1;
}


void ParticleSystem::UpdateParticles(int deltamillis)
{
m_UpdateTechnique.Enable();
m_UpdateTechnique.SetTime(m_Time);
m_UpdateTechnique.SetDeltaTimeMillis(deltamillis);

m_RandomTexture.Bind(RANDOM_TEXTURE_UNIT);
//glEnable(GL_RASTERIZER_DISCARD) is called because the update pass does not need the rasterization stage, so the rasterizer is switched off
glEnable(GL_RASTERIZER_DISCARD);
//in the update pass, m_ParticleBuffer[m_CurrVB] is the input data
glBindBuffer(GL_ARRAY_BUFFER, m_ParticleBuffer[m_CurrVB]);
//binding m_TransformFeedback[m_CurrTFB] to GL_TRANSFORM_FEEDBACK stores the data generated in the GS into m_ParticleBuffer[m_CurrTFB], the buffer attached to that TFO
//this is the "A as input, B as output" arrangement described earlier
glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, m_TransformFeedback[m_CurrTFB]);

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glEnableVertexAttribArray(3);

glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), 0); // type
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)4); // position
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)16); // velocity
glVertexAttribPointer(3, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)28); // lifetime
//activate transform feedback and declare the primitive type the GS will output
glBeginTransformFeedback(GL_POINTS);

if(m_IsFirst)
{
//only on the first draw do we know the number of points to draw (there is just one particle launcher)
glDrawArrays(GL_POINTS, 0, 1);

m_IsFirst = false;
}
else
{
//from the second draw on, the vertex count is unknown, because the GS can generate several vertices
//"The system automatically tracks the number of vertices for us for each buffer and later uses that number internally when the buffer is used for input."
//so the vertex count in the transform feedback buffer is tracked by the system itself;
//we only tell it which TFB-bound buffer to use as the input
glDrawTransformFeedback(GL_POINTS, m_TransformFeedback[m_CurrVB]);
}

glEndTransformFeedback();

glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glDisableVertexAttribArray(3);
}

void ParticleSystem::RenderParticles(const Matrix4f& vp, const Vector3f& camerapos)
{
m_BillboardTechnique.Enable();
m_BillboardTechnique.SetCameraPosition(camerapos);
m_BillboardTechnique.SetVP(vp);
m_PTexture->Bind(COLOR_TEXTURE_UNIT);
//the second pass must render an image, so the rasterizer is enabled again
glDisable(GL_RASTERIZER_DISCARD);
//render using the data previously recorded into m_ParticleBuffer[m_CurrTFB] as input
glBindBuffer(GL_ARRAY_BUFFER, m_ParticleBuffer[m_CurrTFB]);

glEnableVertexAttribArray(0);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)4); // position
//draw the data in m_ParticleBuffer[m_CurrTFB], the buffer bound to m_TransformFeedback[m_CurrTFB]
glDrawTransformFeedback(GL_POINTS, m_TransformFeedback[m_CurrTFB]);

glDisableVertexAttribArray(0);
}
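The buffer toggling at the end of Render() can be isolated into a tiny sketch (the `PingPong` struct is illustrative): every frame, the buffer that was just written becomes the next frame's input:

```cpp
#include <cassert>

// Mirrors m_CurrVB / m_CurrTFB in the tutorial: vb is the buffer read by the
// update pass, tfb the buffer written by it (and read by the render pass).
struct PingPong {
    unsigned vb  = 0;
    unsigned tfb = 1;
    void Advance() {
        vb  = tfb;                 // what we just wrote is next frame's input
        tfb = (tfb + 1) & 0x1;     // toggle between 0 and 1
    }
};
```

After two frames the pair returns to its starting state, so buffers A and B strictly alternate between the input and output roles, satisfying the rule that one resource cannot play both in a single draw call.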
  5. Finally, how the update shader simulates particle spawning and gravity, and how the billboard shader renders the generated particle vertices to the screen
ps_update.vs
#version 330

layout (location = 0) in float Type;
layout (location = 1) in vec3 Position;
layout (location = 2) in vec3 Velocity;
layout (location = 3) in float Age;

out float Type0;
out vec3 Position0;
out vec3 Velocity0;
out float Age0;

//the VS simply passes through the incoming vertex attributes
void main()
{
Type0 = Type;
Position0 = Position;
Velocity0 = Velocity;
Age0 = Age;
}

ps_update.gs
#version 330

layout(points) in;
layout(points) out;
layout(max_vertices = 30) out;

in float Type0[];
in vec3 Position0[];
in vec3 Velocity0[];
in float Age0[];

out float Type1;
out vec3 Position1;
out vec3 Velocity1;
out float Age1;

uniform float gDeltaTimeMillis;
uniform float gTime;
uniform sampler1D gRandomTexture;
uniform float gLauncherLifetime;
uniform float gShellLifetime;
uniform float gSecondaryShellLifetime;

#define PARTICLE_TYPE_LAUNCHER 0.0f
#define PARTICLE_TYPE_SHELL 1.0f
#define PARTICLE_TYPE_SECONDARY_SHELL 2.0f

//fetch a random direction; gRandomTexture is a 1D texture we filled with random values
vec3 GetRandomDir(float TexCoord)
{
vec3 Dir = texture(gRandomTexture, TexCoord).xyz;
Dir -= vec3(0.5, 0.5, 0.5);
return Dir;
}

void main()
{
float Age = Age0[0] + gDeltaTimeMillis;

if (Type0[0] == PARTICLE_TYPE_LAUNCHER) {
//when the launcher's lifetime is reached, spawn a new particle (in a random direction) at the launcher's position
if (Age >= gLauncherLifetime) {
Type1 = PARTICLE_TYPE_SHELL;
Position1 = Position0[0];
vec3 Dir = GetRandomDir(gTime/1000.0);
Dir.y = max(Dir.y, 0.5);
Velocity1 = normalize(Dir) / 20.0;
Age1 = 0.0;
EmitVertex();
EndPrimitive();
Age = 0.0;
}
//re-emit the launcher itself, reset, so it keeps spawning particles
Type1 = PARTICLE_TYPE_LAUNCHER;
Position1 = Position0[0];
Velocity1 = Velocity0[0];
Age1 = Age;
EmitVertex();
EndPrimitive();
}
else {
//for non-launcher particles, use the delta time to update position, age, and so on
float DeltaTimeSecs = gDeltaTimeMillis / 1000.0f;
float t1 = Age0[0] / 1000.0;
float t2 = Age / 1000.0;
vec3 DeltaP = DeltaTimeSecs * Velocity0[0];
vec3 DeltaV = vec3(DeltaTimeSecs) * vec3(0.0, -9.81, 0.0);
if (Type0[0] == PARTICLE_TYPE_SHELL) {
//this particle is a shell launched by the launcher;
//its age decides whether it enters the secondary explosion stage
if (Age < gShellLifetime) {
//the explosion time has not been reached, so only the particle's position, age, etc. are updated
Type1 = PARTICLE_TYPE_SHELL;
Position1 = Position0[0] + DeltaP;
Velocity1 = Velocity0[0] + DeltaV;
Age1 = Age;
EmitVertex();
EndPrimitive();
}
else {
//once a shell launched by the launcher reaches its explosion time, spawn 10 particles in random directions at its position
for (int i = 0 ; i < 10 ; i++) {
Type1 = PARTICLE_TYPE_SECONDARY_SHELL;
Position1 = Position0[0];
vec3 Dir = GetRandomDir((gTime + i)/1000.0);
Velocity1 = normalize(Dir) / 20.0;
Age1 = 0.0f;
EmitVertex();
EndPrimitive();
}
}
}
else {
//a secondary-shell particle still within its lifetime gets its position, age, etc. updated; otherwise it is simply skipped (the particle dies)
if (Age < gSecondaryShellLifetime) {
Type1 = PARTICLE_TYPE_SECONDARY_SHELL;
Position1 = Position0[0] + DeltaP;
Velocity1 = Velocity0[0] + DeltaV;
Age1 = Age;
EmitVertex();
EndPrimitive();
}
}
}
}

ps_update.fs
#version 330
//the update shader only updates particle data and renders nothing, so ps_update.fs is empty
void main()
{
}

//the particles produced via transform feedback are finally rendered one by one as billboards
billboard.vs
#version 330
//only the particle position is needed here
layout (location = 0) in vec3 Position;

void main()
{
gl_Position = vec4(Position, 1.0);
}

billboard.gs
#version 330

layout(points) in;
layout(triangle_strip) out;
layout(max_vertices = 4) out;

uniform mat4 gVP;
uniform vec3 gCameraPos;
uniform float gBillboardSize;

out vec2 TexCoord;

void main()
{
vec3 Pos = gl_in[0].gl_Position.xyz;
vec3 toCamera = normalize(gCameraPos - Pos);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 right = cross(toCamera, up) * gBillboardSize;

Pos -= right;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(0.0, 0.0);
EmitVertex();

Pos.y += gBillboardSize;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(0.0, 1.0);
EmitVertex();

Pos.y -= gBillboardSize;
Pos += right;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(1.0, 0.0);
EmitVertex();

Pos.y += gBillboardSize;
gl_Position = gVP * vec4(Pos, 1.0);
TexCoord = vec2(1.0, 1.0);
EmitVertex();

EndPrimitive();
}

billboard.fs
#version 330

uniform sampler2D gColorMap;

in vec2 TexCoord;
out vec4 FragColor;

void main()
{
FragColor = texture2D(gColorMap, TexCoord);
//filter out the whitish parts of the particle texture
if (FragColor.r >= 0.9 && FragColor.g >= 0.9 && FragColor.b >= 0.9) {
discard;
}
}

With gDEBugger we can inspect the data generated by transform feedback:
TransformFeedbackBufferData

Final Effect:
ParticleSystem
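The physics in ps_update.gs is a single explicit Euler step. A CPU sketch of the same update rule (the gravity constant and the millisecond convention are taken from the shader; the `StepParticle` helper is illustrative):

```cpp
#include <cassert>
#include <cmath>

struct S3 { float x, y, z; };

// One Euler step, as in the GS: DeltaP = dt * v, DeltaV = dt * (0, -9.81, 0),
// with position and velocity both advanced from the old state.
void StepParticle(S3& pos, S3& vel, float deltaTimeMillis)
{
    float dt = deltaTimeMillis / 1000.0f;   // the shader converts to seconds here
    pos = {pos.x + vel.x * dt, pos.y + vel.y * dt, pos.z + vel.z * dt};
    vel = {vel.x, vel.y - 9.81f * dt, vel.z};
}
```

Explicit Euler is accurate enough for fireworks-scale motion at typical frame times; stiffer simulations would want a semi-implicit or higher-order integrator.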

Now for some more advanced uses of transform feedback:
Multiple Output Streams
Multiple streams of vertices can be declared as outputs in the geometry shader (streams let us record extra data into separate outputs so that the transform feedback buffers can access and process it further)

Using the stream layout qualifier – this layout qualifier may be applied globally, to an interface block, or to a single output declaration

Each stream is numbered, starting from zero, max number of streams – GL_MAX_VERTEX_STREAMS

When the stream number is given at global scope, all subsequently declared geometry shader outputs become members of that stream until another output stream layout qualifier is specified
Here is how streams are declared:
StreamDeclaration

Built-in GLSL functions for multiple output streams:
EmitStreamVertex(int stream)
EndStreamPrimitive(int stream)

glTransformFeedbackVaryings() – tells OpenGL how those streams are mapped into transform feedback buffers

When multiple streams are active, it is required that variables associated with a single stream are not written into the same buffer binding point as those associated with any other stream (when multiple streams are active, each stream must be written to its own buffer binding point)

gl_NextBuffer is used to signal that the following output variables are to be recorded into the buffer object bound to the next transform feedback binding point (gl_NextBuffer tells OpenGL that the subsequent variables go to the next transform feedback buffer binding point)

if rasterization & fragment shader are enabled, the output variables belonging to stream 0 will be used to form primitives for rasterization and will be passed into the fragment shader. Output variables belonging to other streams will not be visible in the fragment shader and if transform feedback is not active, they will be discarded (note: when rasterization and the fragment shader are enabled, only the out variables belonging to stream 0 reach the fragment shader; the other streams are invisible to it, and if transform feedback is not active their values are simply discarded)

Note:
When multiple output streams are used in a geometry shader, they must all have points as the primitive type (note: with multiple output streams the geometry shader must output points; a first pass can output points, and a second pass can then process the points recorded by transform feedback and output triangles, etc.)

Primitive Queries
Reason:
Geometry shaders can emit a variable number of vertices per invocation (because the geometry shader expands its input into many primitives and vertices, transform-feedback-related counts are no longer straightforward to infer; note that a vertex shader paired with a transform feedback buffer produces one output per input, whereas a geometry shader produces a one-to-many stream of primitives and vertices)

Problem:
The number of vertices recorded into transform feedback buffers when a geometry shader is present may not be easy to infer

Solution:
Two types of queries are available to count both the number of primitives the geometry shader generates, and the number of primitives actually written into the transform feedback buffers (primitive queries tell us how many primitives and vertices the geometry shader generated and how many were actually written into the transform feedback buffers)

GL_PRIMITIVES_GENERATED – query counts the number of primitives output by the geometry shader – valid at any time
&
GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN – query counts the number of primitives actually written into a transform feedback buffer – only valid when transform feedback is active

Because the geometry shader supports multiple transform feedback streams, primitive queries are indexed as well

3D Picking

“The ability to match a mouse click on a window showing a 3D scene to the primitive (let’s assume a triangle) who was fortunate enough to be projected to the exact same pixel where the mouse hit is called 3D Picking.”

The key to 3D picking is to write, in a manner similar to shadow mapping, the information of every primitive into a picking texture; on a mouse click we look up the primitive at the clicked pixel and render that primitive in whatever color we want.

Implementation steps:
First pass (picking pass) – use gDrawIndex, gObjectIndex, and the primitive index to generate the picking texture

bool PickingTexture::Init(unsigned int WindowWidth, unsigned int WindowHeight)
{
// Create the FBO
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

// Create the texture object for the primitive information buffer
glGenTextures(1, &m_pickingTexture);
glBindTexture(GL_TEXTURE_2D, m_pickingTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight,
0, GL_RGB, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
m_pickingTexture, 0);

// Create the texture object for the depth buffer
glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, WindowWidth, WindowHeight,
0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D,
m_depthTexture, 0);

// Disable reading to avoid problems with older GPUs
glReadBuffer(GL_NONE);

glDrawBuffer(GL_COLOR_ATTACHMENT0);

// Verify that the FBO is correct
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

if (Status != GL_FRAMEBUFFER_COMPLETE) {
printf("FB error, status: 0x%x\n", Status);
return false;
}

// Restore the default framebuffer
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

return GLCheckError();
}

void PickingTexture::EnableWriting()
{
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
}

In a manner similar to generating a shadow map, we create a framebuffer m_fbo and attach m_pickingTexture to GL_COLOR_ATTACHMENT0 and m_depthTexture to GL_DEPTH_ATTACHMENT. We then direct drawing to m_fbo's GL_COLOR_ATTACHMENT0 (the one we attached), so that when we render into m_fbo, the color texture attached there receives the rendered picking data. Before rendering we must bind m_fbo as the target framebuffer. With the picking technique this yields the picking texture.
A depth texture is also generated here; we never sample it, but it is needed for the following reason:
"By combining a depth buffer in the process we guarantee that when several primitives are overlapping the same pixel we get the index of the top-most primitive (closest to the camera)." (the depth buffer ensures that the picking texture stores the primitive closest to the camera)

picking_technique.cpp
#include "picking_technique.h"
#include "ogldev_util.h"

......

void PickingTechnique::SetWVP(const Matrix4f& WVP)
{
glUniformMatrix4fv(m_WVPLocation, 1, GL_TRUE, (const GLfloat*)WVP.m);
}

void PickingTechnique::DrawStartCB(uint DrawIndex)
{
glUniform1ui(m_drawIndexLocation, DrawIndex);
}

void PickingTechnique::SetObjectIndex(uint ObjectIndex)
{
GLExitIfError;
glUniform1ui(m_objectIndexLocation, ObjectIndex);
// GLExitIfError;
}

picking.vs
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

void main()
{
gl_Position = gWVP * vec4(Position, 1.0);
}

picking.fs
#version 330

uniform uint gDrawIndex;
uniform uint gObjectIndex;

out vec3 FragColor;

void main()
{
FragColor = vec3(float(gObjectIndex), float(gDrawIndex),float(gl_PrimitiveID + 1));
}

To understand the code above, first look at what gets stored in the picking texture.
As picking.fs shows, each texel of the picking texture encodes three values: gObjectIndex, gDrawIndex, and gl_PrimitiveID.
When the spider mesh is rendered via void Mesh::Render(IRenderCallbacks* pRenderCallbacks), the object implementing the DrawStartCB callback uploads the current mesh-entry index to the shader's gDrawIndex uniform.

void Mesh::Render(IRenderCallbacks* pRenderCallbacks)
{
......

for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
glBindBuffer(GL_ARRAY_BUFFER, m_Entries[i].VB);
.......
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Entries[i].IB);

.......

if (pRenderCallbacks) {
pRenderCallbacks->DrawStartCB(i);
}

GLExitIfError;
glDrawElements(GL_TRIANGLES, m_Entries[i].NumIndices, GL_UNSIGNED_INT, 0);
}

......
}

The index passed in here is the mesh-entry index within the model, i.e. gDrawIndex identifies which of the spider's sub-meshes is being drawn (this spider consists of 19 meshes, which can be verified in open3mod).
SpiderMeshTree
Next, when rendering the two spiders, the loop index over the spider instances is passed to the shader as gObjectIndex via SetObjectIndex.

static void PickingPhase()
{
.......

gPickingTexture.EnableWriting();

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

gPickingEffect.Enable();

for (uint i = 0 ; i < (int)ARRAY_SIZE_IN_ELEMENTS(gWorldPos) ; i++) {
p.WorldPos(gWorldPos[i]);
gPickingEffect.SetObjectIndex(i);
gPickingEffect.SetWVP(p.GetWVPTrans());
gPSpider->Render(&gPickingEffect);
}

gPickingTexture.DisableWriting();
}

Finally, gl_PrimitiveID is an OpenGL built-in variable: "This is a running index of the primitives which is automatically maintained by the system." (It is the index of the primitive being drawn, restarting from 0 on every draw call.)
This raises a problem: if a texel of the picking texture holds primitive value 0, how do we know whether it is the background or the first primitive of an object?
That is exactly why the fragment shader writes gl_PrimitiveID + 1 into the picking texture: after the offset, any texel whose primitive value is 0 is guaranteed to be background.
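The background test and the +1 offset can be sketched on the CPU side. This is a minimal illustration, assuming a PixelInfo struct shaped like the tutorial's (three floats mirroring the picking.fs output); the helper names are mine, not the tutorial's:

```cpp
#include <cassert>

// Mirrors the picking texture layout: each texel stores
// (gObjectIndex, gDrawIndex, gl_PrimitiveID + 1) as three floats.
struct PixelInfo {
    float ObjectID = 0.0f;
    float DrawID = 0.0f;
    float PrimID = 0.0f;
};

// A pixel hits geometry iff its stored PrimID is nonzero; a cleared
// texel stays at 0, which is why the FS writes gl_PrimitiveID + 1.
bool HitsGeometry(const PixelInfo& p) {
    return p.PrimID != 0.0f;
}

// Undo the +1 offset to recover the real primitive index.
unsigned int RealPrimID(const PixelInfo& p) {
    return static_cast<unsigned int>(p.PrimID) - 1u;
}
```

RealPrimID corresponds to the `Pixel.PrimID - 1` compensation in the render pass shown later.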
PickingTexture

Render pass – by mapping the mouse-click pixel into the picking texture, we obtain the gObjectIndex, gDrawIndex, and gl_PrimitiveID stored at that location. Using these, we first render the clicked primitive in red with a simple color shader, then render the two spiders normally.

void Mesh::Render(unsigned int DrawIndex, unsigned int PrimID)
{
assert(DrawIndex < m_Entries.size());

......

glBindBuffer(GL_ARRAY_BUFFER, m_Entries[DrawIndex].VB);

......

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Entries[DrawIndex].IB);

glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (const GLvoid*)(PrimID * 3 * sizeof(GLuint)));

......
}

PickingTexture::PixelInfo PickingTexture::ReadPixel(unsigned int x, unsigned int y)
{
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_FBO);
glReadBuffer(GL_COLOR_ATTACHMENT0);
PixelInfo pixel;
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, &pixel);
glReadBuffer(GL_NONE);

glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

return pixel;
}

void RenderPhase()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

Pipeline p;
p.Scale(0.1f, 0.1f, 0.1f);
p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
p.SetPerspectiveProj(m_persProjInfo);

// If the left mouse button is clicked check if it hit a triangle
// and color it red
if (m_leftMouseButton.IsPressed) {
PickingTexture::PixelInfo Pixel = m_pickingTexture.ReadPixel(m_leftMouseButton.x,
WINDOW_HEIGHT - m_leftMouseButton.y - 1);
if (Pixel.PrimID != 0) {
m_simpleColorEffect.Enable();
p.WorldPos(m_worldPos[(uint)Pixel.ObjectID]);
m_simpleColorEffect.SetWVP(p.GetWVPTrans());
// Must compensate for the decrement in the FS!
m_pMesh->Render((uint)Pixel.DrawID, (uint)Pixel.PrimID - 1);
}
}

// render the objects as usual
m_lightingEffect.Enable();
m_lightingEffect.SetEyeWorldPos(m_pGameCamera->GetPos());

for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_worldPos) ; i++) {
p.WorldPos(m_worldPos[i]);
m_lightingEffect.SetWVP(p.GetWVPTrans());
m_lightingEffect.SetWorldMatrix(p.GetWorldTrans());
m_pMesh->Render(NULL);
}
}

simple_color.vs
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

void main()
{
gl_Position = gWVP * vec4(Position, 1.0);
}

simple_color.fs
#version 330

layout(location = 0) out vec4 FragColor;

void main()
{
FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

When reading back from the picking texture, note that the mouse coordinates reported by GLUT and the coordinate system used to query the texture do not match, so a conversion is needed.
The following is from Glut Mouse Coordinates:
"In "window" coordinate, the origin (0,0) is top left of the viewport. In OpenGL the origin is bottom left of the viewport. When you click glut give you the window coordinate. All you have to do is calculate this: y = height_of_viewport - y - 1.

Edit: Notice that you compare a screen coordinate (mouse click) with an object coordinate (your rectangle). This is fine if you use a perspective projection like this glOrtho(0,0,viewport_width,viewport_height). If not you need to call gluProject to map each corner of your rectangle in screen coordinate. "
As quoted above, GLUT reports mouse coordinates with the origin (0,0) at the top-left corner, while the OpenGL viewport's origin is at the bottom-left, so we convert the lookup point as follows:

PickingTexture::PixelInfo Pixel = m_pickingTexture.ReadPixel(m_leftMouseButton.x, WINDOW_HEIGHT - m_leftMouseButton.y - 1);

With the correctly mapped coordinates, the gObjectIndex, gDrawIndex, and gl_PrimitiveID we read back are passed to void Mesh::Render(unsigned int DrawIndex, unsigned int PrimID) to render the picked primitive of the picked mesh in red. Note that primitive indices within a mesh start at 0, but the stored primitive index was offset by +1, so we must subtract 1 to recover the correct primitive to draw.
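The byte offset that Mesh::Render(DrawIndex, PrimID) hands to glDrawElements can be checked in isolation. A small sketch (helper name is mine) of the arithmetic used in that call:

```cpp
#include <cassert>
#include <cstddef>

// glDrawElements' last argument is a byte offset into the bound index
// buffer. To start drawing at triangle PrimID (3 indices per triangle,
// each a GLuint, i.e. unsigned int), the offset is
// PrimID * 3 * sizeof(GLuint), matching the call in Mesh::Render.
std::size_t TriangleByteOffset(unsigned int primID) {
    return static_cast<std::size_t>(primID) * 3u * sizeof(unsigned int);
}
```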

// Must compensate for the decrement in the FS!
m_pMesh->Render((uint)Pixel.DrawID, (uint)Pixel.PrimID - 1);

One question I have not resolved; if you know the answer, please share. After the picked primitive is rendered red, the two spiders are rendered normally, so that same primitive is drawn again with the same depth values as before. Why does the final image show red rather than the model's texture color?
3DPicking

Basic Tessellation

The tessellation process doesn’t operate on OpenGL’s classic geometric primitives: points, lines, and triangles, but uses a new primitive called a patch (the tessellation shaders operate on patches rather than on points, lines, or triangles)

Patch is just an ordered list of vertices (the patch is the central concept of the tessellation stages: a patch is a set of vertices, and OpenGL only requires that a patch contain at least one vertex). A patch can be understood as the collection of all Control Points (CPs) of a piece of geometry; the control points determine the geometry's final shape.

Let's look at where the tessellation stages execute in the OpenGL pipeline (image source below)
TessellationShaderProcess

Two shader stages plus one fixed-function stage:

  1. Tessellation Control Shader (TCS)
    “The control shader calculates a set of numbers called Tessellation Levels (TL). The TLs determine the Tessellation level of detail - how many triangles to generate for the patch.”
    In other words, the TCS does not subdivide vertices itself; it specifies the subdivision rules (how to subdivide, and how finely).

    The computation of the Tessellation Levels (TL) is flexible: it can depend on the distance to the camera, or on how many screen pixels the patch will finally cover.

    Note:
    “It is executed once per CP in the output patch”

  2. Primitive Generator (fixed function)
    “OpenGL passes the output of the tessellation control shader to the primitive generator, which generates the mesh of geometric primitives and tessellation coordinates that the tessellation evaluation shader stage uses.”(The PG outputs the subdivided domain locations and their tessellation coordinates; from those coordinates the TES computes the corresponding vertex positions.)

    The PG subdivides according to the rules specified by the TCS.
    A key concept here is the Domain:
    the subdivision rules depend on the domain type.
    Let's look at the quad domain and the triangle domain:
    Domains

    The domain type determines what the inner and outer tessellation levels mean:
    Quad Tessellation:
    ……

    Isoline Tessellation:
    Use only two of the outer-tessellation levels to determine the amount of subdivision

    Triangle Tessellation:
    Triangular domains use barycentric coordinates to specify their Tessellation coordinates

    As shown above, the three domain types subdivide differently.
    Triangles are subdivided using barycentric coordinates.
    Here is the result of subdividing a triangle:
    TriangleDomainSubdivision

  3. Tessellation Evaluation Shader (TES)
    The TES is executed on all generated domain locations. Positions each of the vertices in the final mesh (the TES runs once per vertex produced by the TCS and the Primitive Generator; from each vertex's gl_TessCoord (its relative coordinate within the patch) it computes, using the domain's interpolation rule, the vertex's texture coordinates, position, and normal, which is how subdivision and vertex displacement are achieved)

    Now let's work through an example:
    Basic Tessellation Tutorial
    The tutorial implements the following:

    1. The tessellation (LOD) level of each triangle edge of the quad.obj model depends on its distance to the camera

    2. A height map supplies per-vertex height information, and the +/- keys scale its contribution

    3. The z key toggles wireframe mode so the subdivision can be inspected

      The main implementation steps:

      1. Load the height map and color map and bind them as the displacement and color textures
static bool InitializeTesselationInfo()
{
......
//Load the height map and color map
gPDisplacementMap = new Texture(GL_TEXTURE_2D, "../Content/heightmap.jpg");

if(!gPDisplacementMap->Load())
{
assert(false);
return false;
}

gPDisplacementMap->Bind(DISPLACEMENT_TEXTURE_UNIT);

glActiveTexture(GL_TEXTURE0);

gPColorMap = new Texture(GL_TEXTURE_2D, "../Content/diffuse.jpg");

if(!gPColorMap->Load())
{
assert(false);
return false;
}

gPColorMap->Bind(COLOR_TEXTURE_UNIT);

......
}

static bool InitializeLight()
{
......

//Bind the previously loaded height map and color map as the displacement and color textures
gLightingTechnique.Enable();
gLightingTechnique.SetDirectionalLight(gDirLight);
gLightingTechnique.SetColorTextureUnit(COLOR_TEXTURE_UNIT_INDEX);
gLightingTechnique.SetDisplacementMapTextureUnit(DISPLACEMENT_TEXTURE_UNIT_INDEX);
gLightingTechnique.SetDispFactor(gDisFactor);
......
}
	2. Compile and link a shader program that includes the TCS and TES
bool LightingTechnique::Init()
{
if (!Technique::Init()) {
return false;
}

if (!AddShader(GL_VERTEX_SHADER, "lighting.vs")) {
return false;
}

if (!AddShader(GL_TESS_CONTROL_SHADER, "lighting.cs")) {
return false;
}

if (!AddShader(GL_TESS_EVALUATION_SHADER, "lighting.es")) {
return false;
}

if (!AddShader(GL_FRAGMENT_SHADER, "lighting.fs")) {
return false;
}

if (!Finalize()) {
return false;
}

......
}
	3. Draw the quad with GL_PATCHES to feed the tessellation shaders
void Mesh::Render(IRenderCallbacks* pRenderCallbacks)
{
......
//Drawing with GL_PATCHES routes the primitives through the tessellation stages
glDrawElements(GL_PATCHES, m_Entries[i].NumIndices, GL_UNSIGNED_INT, 0);

......
}
	4. Defer the view/projection transform (the tessellation stages generate new vertices, so this step moves from the VS to the TES)
lighting.vs
#version 410 core
layout (location = 0) in vec3 Position_VS_in;
layout (location = 1) in vec2 TexCoord_VS_in;
layout (location = 2) in vec3 Normal_VS_in;
uniform mat4 gWorld;

out vec3 WorldPos_CS_in;
out vec2 TexCoord_CS_in;
out vec3 Normal_CS_in;

void main()
{
//Note that, unlike usual, we do not apply the view and projection transforms to the world-space position here
//The tessellation stages will generate more vertices, so that step is deferred from the VS to the TES
WorldPos_CS_in = (gWorld * vec4(Position_VS_in, 1.0)).xyz;
TexCoord_CS_in = TexCoord_VS_in;
Normal_CS_in = (gWorld * vec4(Normal_VS_in, 0.0)).xyz;
}
	5. TCS: declare the number of vertices per patch and the subdivision rule (here the tessellation level depends on the distance from each patch vertex to the camera)
lighting.cs
#version 410 core
// Number of vertices (CPs) in the output patch
// On the application side, glPatchParameteri() tells OpenGL how many vertices make up one input patch
layout (vertices = 3) out;
uniform vec3 gEyeWorldPos;

// attributes of the input CPs
in vec3 WorldPos_CS_in[];
in vec2 TexCoord_CS_in[];
in vec3 Normal_CS_in[];

// attributes of the output CPs
out vec3 WorldPos_ES_in[];
out vec2 TexCoord_ES_in[];
out vec3 Normal_ES_in[];
// Choose the patch's tessellation level from the distance between its vertices and the camera
float GetTessLevel(float Distance0, float Distance1)
{
float AvgDistance = (Distance0 + Distance1) / 2.0;
if (AvgDistance <= 2.0) {
return 10.0;
}
else if (AvgDistance <= 5.0) {
return 7.0;
}
else {
return 3.0;
}
}
void main()
{
// Set the control points of the output patch
// Copy the patch's original control-point attributes to the output; the TES will use them to compute the positions of the subdivided vertices
// **gl_InvocationID** is used to access the specific vertex of a patch
// We declared 3 vertices per output patch, and the TCS runs once per output CP, so it executes 3 times per patch
TexCoord_ES_in[gl_InvocationID] = TexCoord_CS_in[gl_InvocationID];
Normal_ES_in[gl_InvocationID] = Normal_CS_in[gl_InvocationID];
WorldPos_ES_in[gl_InvocationID] = WorldPos_CS_in[gl_InvocationID];

// Calculate the distance from the camera to the three control points
// Compute the distance from the camera to each patch vertex
float EyeToVertexDistance0 = distance(gEyeWorldPos, WorldPos_ES_in[0]);
float EyeToVertexDistance1 = distance(gEyeWorldPos, WorldPos_ES_in[1]);
float EyeToVertexDistance2 = distance(gEyeWorldPos, WorldPos_ES_in[2]);

// Calculate the tessellation levels
// Set the subdivision mode and level from the camera distances computed above
// **gl_TessLevelInner**
// Specifies how the interior of the domain is subdivided; stored in a two-element array named gl_TessLevelInner
// **gl_TessLevelOuter**
// Controls how the perimeter of the domain is subdivided; stored in an implicitly declared four-element array named gl_TessLevelOuter
// gl_TessLevelInner & gl_TessLevelOuter mean different things for different domain types; see the Domain discussion above
// Alternatively, the inner and outer levels can be set from the application with glPatchParameterfv()
gl_TessLevelOuter[0] = GetTessLevel(EyeToVertexDistance1, EyeToVertexDistance2);
gl_TessLevelOuter[1] = GetTessLevel(EyeToVertexDistance2, EyeToVertexDistance0);
gl_TessLevelOuter[2] = GetTessLevel(EyeToVertexDistance0, EyeToVertexDistance1);
gl_TessLevelInner[0] = gl_TessLevelOuter[2];
}
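The distance-based LOD rule in GetTessLevel above is pure arithmetic, so it can be mirrored and checked on the CPU. A minimal sketch, porting the GLSL function verbatim:

```cpp
#include <cassert>

// CPU mirror of GetTessLevel() in lighting.cs: the average camera
// distance of an edge's two control points selects one of three fixed
// tessellation levels (a simple distance-based LOD rule).
float GetTessLevel(float Distance0, float Distance1) {
    float AvgDistance = (Distance0 + Distance1) / 2.0f;
    if (AvgDistance <= 2.0f) {
        return 10.0f;  // close to the camera: finest subdivision
    } else if (AvgDistance <= 5.0f) {
        return 7.0f;   // mid range
    } else {
        return 3.0f;   // far away: coarsest subdivision
    }
}
```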
	6. TES: from each generated vertex's gl_TessCoord (its relative location within the patch), interpolate the vertex's texture coordinates, position, and normal (here the height map is sampled and scaled to displace the vertex height dynamically), then transform every generated vertex to clip space
lighting.es
#version 410 core
// layout (quads, equal_spacing, ccw) in; (declares the domain type and other generation options)
layout(triangles, equal_spacing, ccw) in;

uniform mat4 gVP;
uniform sampler2D gDisplacementMap;
uniform float gDispFactor;

in vec3 WorldPos_ES_in[];
in vec2 TexCoord_ES_in[];
in vec3 Normal_ES_in[];

out vec3 WorldPos_FS_in;
out vec2 TexCoord_FS_in;
out vec3 Normal_FS_in;

vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
{
return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
}

vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
{
// We declared the triangle domain above, so positions are interpolated with the triangle domain's barycentric weights
// **gl_TessCoord** holds the current vertex's coordinate within the patch
return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
}

void main()
{
// Interpolate the attributes of the output vertex using the barycentric coordinates
// Using each generated vertex's relative coordinate gl_TessCoord, interpolate its position, normal, and texture coordinates with the domain's interpolation rule
TexCoord_FS_in = interpolate2D(TexCoord_ES_in[0], TexCoord_ES_in[1], TexCoord_ES_in[2]);
Normal_FS_in = interpolate3D(Normal_ES_in[0], Normal_ES_in[1], Normal_ES_in[2]);
Normal_FS_in = normalize(Normal_FS_in);
WorldPos_FS_in = interpolate3D(WorldPos_ES_in[0], WorldPos_ES_in[1], WorldPos_ES_in[2]);

// Displace the vertex along the normal
// Sample the previously loaded height map and displace the vertex along its normal, scaled by the controllable factor
float Displacement = texture(gDisplacementMap, TexCoord_FS_in.xy).x;
WorldPos_FS_in += Normal_FS_in * Displacement * gDispFactor;
// Finally apply the view and perspective projection transforms so every vertex lands at the correct screen position
gl_Position = gVP * vec4(WorldPos_FS_in, 1.0);
}
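The barycentric interpolation done by interpolate3D above can be reproduced on the CPU. A minimal sketch with a hypothetical Vec3 type (the tutorial uses its own Vector3f):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// CPU mirror of interpolate3D() in lighting.es: for the triangle
// domain, gl_TessCoord is a barycentric coordinate (u, v, w) with
// u + v + w = 1, and the generated vertex attribute is the weighted
// sum of the patch's three control points.
Vec3 Interpolate3D(Vec3 tc, Vec3 v0, Vec3 v1, Vec3 v2) {
    return { tc.x * v0.x + tc.y * v1.x + tc.z * v2.x,
             tc.x * v0.y + tc.y * v1.y + tc.z * v2.y,
             tc.x * v0.z + tc.y * v1.z + tc.z * v2.z };
}
```

With tc = (1,0,0) the result is exactly v0 (a corner of the patch); with tc = (1/3,1/3,1/3) it is the triangle's centroid.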
	7. Pass the interpolated world-space positions, world-space normals, and texture coordinates of all generated vertices into the usual lighting computation to produce the final color
#version 410 core                                                                           

const int MAX_POINT_LIGHTS = 2;
const int MAX_SPOT_LIGHTS = 2;

// World-space position and normal passed down from the TES
in vec2 TexCoord_FS_in;
in vec3 Normal_FS_in;
in vec3 WorldPos_FS_in;

out vec4 FragColor;
......

void main()
{
// Use the world-space position and normal in the lighting computation to produce the final color
vec3 Normal = normalize(Normal_FS_in);
vec4 TotalLight = CalcDirectionalLight(Normal);

for (int i = 0 ; i < gNumPointLights ; i++) {
TotalLight += CalcPointLight(gPointLights[i], Normal);
}

for (int i = 0 ; i < gNumSpotLights ; i++) {
TotalLight += CalcSpotLight(gSpotLights[i], Normal);
}

// This samples the color map loaded earlier
FragColor = texture(gColorMap, TexCoord_FS_in.xy) * TotalLight;
}
	8. Controls (displacement-factor control for the height map, wireframe toggle)
static void KeyboardCB(unsigned char key, int x, int y)
{
switch(key)
{
case 'q':
glutLeaveMainLoop();
break;
case OGLDEV_KEY_PLUS:
gDisFactor += 0.01f;
break;

case OGLDEV_KEY_MINUS:
if (gDisFactor >= 0.01f) {
gDisFactor -= 0.01f;
}
break;

case 'z':
gIsWireFrame = !gIsWireFrame;
// Wireframe toggle
if (gIsWireFrame) {
glPolygonMode(GL_FRONT, GL_LINE);
}
else {
glPolygonMode(GL_FRONT, GL_FILL);
}
break;
}
}

static void RenderPass()
{
......
// Upload the factor that scales the height map's influence on the generated vertex positions
// See lighting.es for the actual computation:
// WorldPos_FS_in += Normal_FS_in * Displacement * gDispFactor;
gLightingTechnique.SetDispFactor(gDisFactor);

gQuad->Render(NULL);
}

Final Effect:
TessellationFill
TessellationClose
TessellationFar
TessellationHeightMap

Summary:
The tessellation shaders are optional stages, not mandatory ones.

Unlike the vertex shader, the tessellation shaders operate on a patch (a group of vertices) rather than a single vertex, because they need the whole patch to compute the positions of newly generated vertices.

The tessellation control shader configures how the patch is subdivided (by choosing the level computation you can implement effects such as LOD, i.e. a subdivision level that varies with distance to the camera).

The primitive generator performs the actual subdivision of the domain.

The tessellation evaluation shader uses each generated vertex's coordinate within the patch to compute its position, texture coordinates, and normal.

Bezier curves can serve here as the rule for computing the subdivided positions, producing smooth curved surfaces.

Another application is displacement mapping: in the tessellation evaluation shader, the tessellation coordinate is used to sample a texture.

For Bezier curves, see PN Triangles Tessellation.

Vertex Array Objects

“The Vertex Array Object (a.k.a VAO) is a special type of object that encapsulates all the data that is associated with the vertex processor. Instead of containing the actual data, it holds references to the vertex buffers, the index buffer and the layout specification of the vertex itself.”

“VAOs store all of the links between the attributes and your VBOs with raw vertex data.”

From the definitions above, a Vertex Array Object (VAO) stores references to the associated vertex buffers and the layout used to read them, not the vertex data itself. To draw a particular set of vertex buffers, we configure their contents and attribute layouts once while the VAO is bound; afterwards we only need to bind that VAO, activate it, and issue the draw call.

Let's compare two ways of laying out vertex data, AOS (Array Of Structures) and SOA (Structure Of Arrays):
AOSAndSOA

This example stores its data in SOA form.
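The two layouts can be sketched side by side. This is an illustration only; the struct and function names are hypothetical, not from the tutorial:

```cpp
#include <cassert>
#include <vector>

// AOS: one interleaved struct per vertex.
struct VertexAoS { float pos[3]; float normal[3]; float uv[2]; };

// SOA: one tightly packed array per attribute, which is how this
// tutorial feeds separate VBOs (positions, normals, texcoords).
struct MeshSoA {
    std::vector<float> positions; // 3 floats per vertex
    std::vector<float> normals;   // 3 floats per vertex
    std::vector<float> uvs;       // 2 floats per vertex
};

// Scatter one AOS vertex into the SOA arrays.
void PushVertex(MeshSoA& m, const VertexAoS& v) {
    m.positions.insert(m.positions.end(), v.pos, v.pos + 3);
    m.normals.insert(m.normals.end(), v.normal, v.normal + 3);
    m.uvs.insert(m.uvs.end(), v.uv, v.uv + 2);
}
```

With SOA, each attribute array can be uploaded to its own GL_ARRAY_BUFFER with stride 0, as the LoadMesh code below does.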

  1. Generate and bind the VAO before creating the VBOs (every subsequent VBO operation is then recorded in this VAO, e.g. which buffers are bound and how their data is accessed)
#define INDEX_BUFFER 0
#define POS_VB 1
#define NORMAL_VB 2
#define TEXCOORD_VB 3

bool BasicMesh::LoadMesh(const string& Filename)
{
// Release the previously loaded mesh (if it exists)
Clear();

// Create the VAO
glGenVertexArrays(1, &m_VAO);
glBindVertexArray(m_VAO);

// Create the buffers for the vertices attributes
glGenBuffers(ARRAY_SIZE_IN_ELEMENTS(m_Buffers), m_Buffers);

......

// Make sure the VAO is not changed from the outside
glBindVertexArray(0);

return Ret;
}
  2. Store the vertex data in SOA form (the four vectors below hold Positions, Normals, TexCoords, and Indices), bind each to an array buffer, and declare its access layout
bool BasicMesh::InitFromScene(const aiScene* pScene, const string& Filename)
{
m_Entries.resize(pScene->mNumMeshes);
m_Textures.resize(pScene->mNumMaterials);

vector<Vector3f> Positions;
vector<Vector3f> Normals;
vector<Vector2f> TexCoords;
vector<unsigned int> Indices;

unsigned int NumVertices = 0;
unsigned int NumIndices = 0;

// Count the number of vertices and indices
for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
m_Entries[i].MaterialIndex = pScene->mMeshes[i]->mMaterialIndex;
m_Entries[i].NumIndices = pScene->mMeshes[i]->mNumFaces * 3;
m_Entries[i].BaseVertex = NumVertices;
m_Entries[i].BaseIndex = NumIndices;

NumVertices += pScene->mMeshes[i]->mNumVertices;
NumIndices += m_Entries[i].NumIndices;
}

// Reserve space in the vectors for the vertex attributes and indices
Positions.reserve(NumVertices);
Normals.reserve(NumVertices);
TexCoords.reserve(NumVertices);
Indices.reserve(NumIndices);

// Initialize the meshes in the scene one by one
for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
const aiMesh* paiMesh = pScene->mMeshes[i];
InitMesh(paiMesh, Positions, Normals, TexCoords, Indices);
}

if (!InitMaterials(pScene, Filename)) {
return false;
}

// Generate and populate the buffers with vertex attributes and the indices
// The buffers below are filled in SOA form
glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[POS_VB]);
glBufferData(GL_ARRAY_BUFFER, sizeof(Positions[0]) * Positions.size(), &Positions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(POSITION_LOCATION);
glVertexAttribPointer(POSITION_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);

glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[TEXCOORD_VB]);
glBufferData(GL_ARRAY_BUFFER, sizeof(TexCoords[0]) * TexCoords.size(), &TexCoords[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(TEX_COORD_LOCATION);
glVertexAttribPointer(TEX_COORD_LOCATION, 2, GL_FLOAT, GL_FALSE, 0, 0);

glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[NORMAL_VB]);
glBufferData(GL_ARRAY_BUFFER, sizeof(Normals[0]) * Normals.size(), &Normals[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(NORMAL_LOCATION);
glVertexAttribPointer(NORMAL_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Buffers[INDEX_BUFFER]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices[0]) * Indices.size(), &Indices[0], GL_STATIC_DRAW);

return GLCheckError();
}

void BasicMesh::InitMesh(const aiMesh* paiMesh,
vector<Vector3f>& Positions,
vector<Vector3f>& Normals,
vector<Vector2f>& TexCoords,
vector<unsigned int>& Indices)
{
const aiVector3D Zero3D(0.0f, 0.0f, 0.0f);

// Populate the vertex attribute vectors
for (unsigned int i = 0 ; i < paiMesh->mNumVertices ; i++) {
const aiVector3D* pPos = &(paiMesh->mVertices[i]);
const aiVector3D* pNormal = &(paiMesh->mNormals[i]);
const aiVector3D* pTexCoord = paiMesh->HasTextureCoords(0) ? &(paiMesh->mTextureCoords[0][i]) : &Zero3D;

Positions.push_back(Vector3f(pPos->x, pPos->y, pPos->z));
Normals.push_back(Vector3f(pNormal->x, pNormal->y, pNormal->z));
TexCoords.push_back(Vector2f(pTexCoord->x, pTexCoord->y));
}

// Populate the index buffer
for (unsigned int i = 0 ; i < paiMesh->mNumFaces ; i++) {
const aiFace& Face = paiMesh->mFaces[i];
assert(Face.mNumIndices == 3);
Indices.push_back(Face.mIndices[0]);
Indices.push_back(Face.mIndices[1]);
Indices.push_back(Face.mIndices[2]);
}
}
  3. To draw, bind the VAO with glBindVertexArray, then call glDrawElementsBaseVertex to describe how to draw from the buffers recorded in the VAO
void BasicMesh::Render()
{
glBindVertexArray(m_VAO);

for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
const unsigned int MaterialIndex = m_Entries[i].MaterialIndex;

assert(MaterialIndex < m_Textures.size());

if (m_Textures[MaterialIndex]) {
m_Textures[MaterialIndex]->Bind(COLOR_TEXTURE_UNIT);
}

//The draw parameters below deserve a closer look
/*
// Count the number of vertices and indices
for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
m_Entries[i].MaterialIndex = pScene->mMeshes[i]->mMaterialIndex;
m_Entries[i].NumIndices = pScene->mMeshes[i]->mNumFaces * 3;
m_Entries[i].BaseVertex = NumVertices;
m_Entries[i].BaseIndex = NumIndices;

NumVertices += pScene->mMeshes[i]->mNumVertices;
NumIndices += m_Entries[i].NumIndices;
}
*/
// Because the array buffers hold the data of all entries in SOA form, glDrawElementsBaseVertex
// needs the correct index offset and base vertex for each entry to draw it correctly.
// m_Entries[i].BaseIndex is the running total of indices of all earlier entries;
// Assimp's indices start at 0 per mesh, but the index buffer concatenates every entry's
// indices, so this accumulated count locates this entry's indices in the shared buffer.
// m_Entries[i].BaseVertex is likewise the running total of vertices of all earlier entries,
// and is added to every index so it addresses this entry's vertices in the shared buffers.
glDrawElementsBaseVertex(GL_TRIANGLES,
m_Entries[i].NumIndices,
GL_UNSIGNED_INT,
(void*)(sizeof(unsigned int) * m_Entries[i].BaseIndex),
m_Entries[i].BaseVertex);
}

// Make sure the VAO is not changed from the outside
glBindVertexArray(0);
}
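The BaseVertex/BaseIndex bookkeeping used above is just a running sum over the entries. A minimal sketch (Entry is simplified from the tutorial's MeshEntry):

```cpp
#include <cassert>
#include <vector>

// Each sub-mesh records where its data starts in the shared,
// concatenated vertex and index buffers.
struct Entry {
    unsigned NumVertices;
    unsigned NumIndices;
    unsigned BaseVertex = 0;
    unsigned BaseIndex = 0;
};

// Accumulate counts over earlier entries, exactly as the
// "Count the number of vertices and indices" loop does.
void ComputeBases(std::vector<Entry>& entries) {
    unsigned v = 0, i = 0;
    for (Entry& e : entries) {
        e.BaseVertex = v;
        e.BaseIndex = i;
        v += e.NumVertices;
        i += e.NumIndices;
    }
}
```

glDrawElementsBaseVertex then receives BaseIndex (scaled to bytes) as the index-buffer offset and BaseVertex as the value added to every fetched index.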

Final Effect (the third model's data failed to load, so only two are shown):
VAOFinalEffect

For more, see Drawing polygons & OpenGL-Draw-Call-Code-Study-Analysis

Instanced Rendering

“Instanced rendering means that we can render multiple instances in a single draw call and provide each instance with some unique attributes.”(render many instances of the same mesh in a single draw call)

Using the Instance Counter in Shaders:
The index of the current instance is available to the vertex shader in the built-in variable gl_InstanceID. This variable is implicitly declared as an integer. It starts counting from zero and counts up one each time an instance is rendered.

Instancing Redux:
Steps:

  1. Create some vertex shader inputs that you intend to be instanced
  2. Set the vertex attribute divisors with glVertexAttribDivisor()
  3. Use the gl_InstanceID built-in variable in the vertex shader
  4. Use the instanced versions of the rendering functions such as glDrawArraysInstanced() ……
#define WVP_LOCATION 3
#define WORLD_LOCATION 7

bool Mesh::InitFromScene(const aiScene* pScene, const string& Filename)
{
......

glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[WVP_MAT_VB]);

for (unsigned int i = 0; i < 4 ; i++) {
glEnableVertexAttribArray(WVP_LOCATION + i);
// Note: "A vertex attribute can contain no more than 4 floating points or integers."
// A vertex attribute can hold at most 4 floats or integers, so each row of the mat4 gets its own attribute slot
glVertexAttribPointer(WVP_LOCATION + i, 4, GL_FLOAT, GL_FALSE, sizeof(Matrix4f),
(const GLvoid*)(sizeof(GLfloat) * i * 4));
// This "makes this an instance data rather than vertex data."
// The first argument names the attribute to treat as instance data rather than vertex data;
// the second is the divisor: 1 advances the attribute after every instance, 2 after every two instances, etc.
glVertexAttribDivisor(WVP_LOCATION + i, 1);
}

glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[WORLD_MAT_VB]);

for (unsigned int i = 0; i < 4 ; i++) {
glEnableVertexAttribArray(WORLD_LOCATION + i);
glVertexAttribPointer(WORLD_LOCATION + i, 4, GL_FLOAT, GL_FALSE, sizeof(Matrix4f),
(const GLvoid*)(sizeof(GLfloat) * i * 4));
glVertexAttribDivisor(WORLD_LOCATION + i, 1);
}

return GLCheckError();
}

void Mesh::Render(unsigned int NumInstances, const Matrix4f* WVPMats, const Matrix4f* WorldMats)
{
// Upload the dynamic per-instance data
glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[WVP_MAT_VB]);
glBufferData(GL_ARRAY_BUFFER, sizeof(Matrix4f) * NumInstances, WVPMats, GL_DYNAMIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, m_Buffers[WORLD_MAT_VB]);
glBufferData(GL_ARRAY_BUFFER, sizeof(Matrix4f) * NumInstances, WorldMats, GL_DYNAMIC_DRAW);

glBindVertexArray(m_VAO);

for (unsigned int i = 0 ; i < m_Entries.size() ; i++) {
const unsigned int MaterialIndex = m_Entries[i].MaterialIndex;

assert(MaterialIndex < m_Textures.size());

if (m_Textures[MaterialIndex]) {
m_Textures[MaterialIndex]->Bind(GL_TEXTURE0);
}
// Call glDrawElementsInstancedBaseVertex to render all instances
glDrawElementsInstancedBaseVertex(GL_TRIANGLES,
m_Entries[i].NumIndices,
GL_UNSIGNED_INT,
(void*)(sizeof(unsigned int) * m_Entries[i].BaseIndex),
NumInstances,
m_Entries[i].BaseVertex);
}

// Make sure the VAO is not changed from the outside
glBindVertexArray(0);
}


virtual void RenderSceneCB()
{
.......

Matrix4f WVPMatrics[NUM_INSTANCES];
Matrix4f WorldMatrices[NUM_INSTANCES];

for (unsigned int i = 0 ; i < NUM_INSTANCES ; i++) {
Vector3f Pos(m_positions[i]);
Pos.y += sinf(m_scale) * m_velocity[i];
p.WorldPos(Pos);
// Note why the matrices are transposed before upload:
// when a mat4 vertex attribute is filled row by row this way, each
// uploaded row becomes a column of the GLSL mat4 (GLSL matrices are
// column-major). Our Matrix4f is row-major, so we transpose first to
// end up with the correct matrix on the GPU; a matrix already stored
// column-major would not need the transpose.
WVPMatrics[i] = p.GetWVPTrans().Transpose();
WorldMatrices[i] = p.GetWorldTrans().Transpose();
}

m_pMesh->Render(NUM_INSTANCES, WVPMatrics, WorldMatrices);

......
}
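The transpose in the comment above is plain index swapping. A minimal sketch with a hypothetical row-major Mat4 (the tutorial's Matrix4f has its own Transpose member):

```cpp
#include <cassert>

// A row-major 4x4 matrix: m[row][col].
struct Mat4 { float m[4][4]; };

// Swap rows and columns. Uploading the transposed row-major matrix
// row by row yields the intended column-major GLSL mat4.
Mat4 Transpose(const Mat4& a) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r.m[i][j] = a.m[j][i];
    return r;
}
```

Note that for uniforms the same effect is achieved by passing GL_TRUE as glUniformMatrix4fv's transpose argument, as SetWVP does earlier; matrix vertex attributes have no such flag, hence the explicit Transpose() here.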

lighting.vs
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
// A vertex attribute holds at most 4 floats or integers, so each mat4 occupies 4 consecutive attribute locations
// hence WVP is at location 3 and World at location 7
layout (location = 3) in mat4 WVP;
layout (location = 7) in mat4 World;

out vec2 TexCoord0;
out vec3 Normal0;
out vec3 WorldPos0;
// "Since integers cannot be interpolated by the rasterizer we have to mark the output variable as 'flat' (forgetting to do that will trigger a compiler error)."
// Integers cannot be interpolated by the rasterizer, so the output must be marked 'flat' to avoid a compile error
flat out int InstanceID;

void main()
{
gl_Position = WVP * vec4(Position, 1.0);
TexCoord0 = TexCoord;
Normal0 = (World * vec4(Normal, 0.0)).xyz;
WorldPos0 = (World * vec4(Position, 1.0)).xyz;
InstanceID = gl_InstanceID;
}

Final Effect:
InstancedRendering

Note:
gl_InstanceID is always present in the vertex shader, even when the current drawing command is not one of the instanced ones.

GLFX - An OpenGL Effect Library

First, what is an effect file?
“An effect is a text file that can potentially contain multiple shaders and functions and makes it easy to combine them together into programs. This overcomes the limitation of the glShaderSource() function that requires you to specify the text of a single shader stage.”
In other words, an effect file lets us put all shaders into a single file instead of creating one file per shader stage, so a structure defined once can be shared by several shaders.

So what is GLFX?
“Effects system for OpenGL and OpenGL ES”
GLFX provides a convenient interface for turning an effect file into a GLSL program.

GLFX source download

Next, how to use GLFX with an effect file:

  1. Build glfx.lib and add it to the project's references
  2. Include the glfx.h header
#include <glfx.h>
  3. Parse the effect file
if (!glfxParseEffectFromFile(effect, "effect.glsl")) {
#ifdef __cplusplus // C++ error handling
std::string log = glfxGetEffectLog(effect);
std::cout << "Error parsing effect: " << log << std::endl;
#else // C error handling
char log[10000];
glfxGetEffectLog(effect, log, sizeof(log));
printf("Error parsing effect: %s:\n", log);
#endif
return;
}
  4. Compile and enable the effect program
int shaderProg = glfxCompileProgram(effect, "ProgramName");

if (shaderProg < 0) {
// same error handling as above
}

glUseProgram(shaderProg);
  5. Release the effect file after we no longer use it
glfxDeleteEffect(effect); 

Now let's look at how writing an effect file differs from plain GLSL shaders:

  1. Use the 'program' keyword to define a program and name the entry point of each shader stage inside it
program Lighting
{
//VSmain() and FSmain() name the entry functions of the VS and FS
vs(410)=VSmain();
fs(410)=FSmain();
};
  2. Use the 'shader' keyword (instead of void) to define each stage's entry function
shader VSmain()
{
calculate_something();
}
  3. Multiple programs may be defined in one effect file; pass the desired program name to glfxCompileProgram() to compile a specific one
  4. Because all shaders share one file, struct definitions can be shared instead of declaring matching in/out variables one by one
struct VSOutput
{
vec2 TexCoord;
vec3 Normal;
};

shader VSmain(in vec3 Pos, in vec2 TexCoord, in vec3 Normal, out VSOutput VSout)
{
// do some transformations and update 'VSout'
VSout.TexCoord = TexCoord;
VSout.Normal = Normal;
}

shader FSmain(in VSOutput FSin, out vec4 FragColor)
{
// 'FSin' matches 'VSout' from the VS. Use it
// to do lighting calculations and write the final output to 'FragColor'
}
  5. An effect file can directly include other files (the included file is inserted verbatim and is not parsed by GLFX, so it must contain pure GLSL and no GLFX constructs)
#include "another_effect.glsl" 
  6. Use the ':' suffix to assign attribute locations quickly instead of writing layout(location=……) for each one
struct VSInput2
{
vec3 Normal;
vec3 Tangent;
};

shader VSmain(in vec3 Pos : 5, in vec2 TexCoord : 6, in float colorScale : 10)
  7. Variables qualified with keywords such as 'flat' or 'noperspective' cannot be placed in a struct defined in the effect file; they must be declared in an interface, and an interface can only be passed between shader stages. If you need to pass it as a whole to another function you will need to copy the contents to a struct. For example:
interface foo
{
flat int a;
noperspective float b;
};

struct bar
{
int a;
float b;
};

shader VSmain(out foo f)
{
// ...
}

void Calc(bar c)
{
// ...
}

shader FSmain(in foo f)
{
bar c;
c.a = f.a;
c.b = f.b;

Calc(c);
}
  8. The glfxc tool can parse and compile an effect file offline, to catch problems early (I haven't tried it, since I failed to build glfxc)
    glfxc

Final Effect:
GLFX

Note:
“GLFX is dependant on GLEW”(GLFX depends on GLEW; building GLFX requires pointing at your GLEW installation)

Deferred Shading

Before looking at deferred shading, we should understand its counterpart, forward rendering.
What is forward rendering?
Forward rendering is what we have used so far: geometry and texture data are sent to the GPU, and every vertex runs through the full pipeline (VS, GS, FS, ...) to produce the final render target on screen.
Image source below
ForwardRendering

Given Forward Rendering, why do we still need Deferred Shading?

  1. Since each pixel of every object gets only a single FS invocation we have to provide the FS with information on all light sources and take all of them into account when calculating the light effect per pixel. This is a simple approach but it has its downsides. If the scene is highly complex (as is the case in most modern games) with many objects and a large depth complexity (same screen pixel covered by several objects) we get a lot of wasted GPU cycles. (The first problem: a lot of useless lighting work. In traditional Forward Rendering, every submitted vertex goes through the full pipeline, lighting included. In a large game there are many objects (vertices), but only those closest to the camera, plus transparent ones, end up visible on screen, so computing lighting for every vertex wastes a lot of work.)

  2. When there are many light sources, forward rendering simply doesn't scale well. (Because Forward Rendering lights every pixel, when the scene contains many lights it computes every light's effect on every object no matter how weak or strong that influence is, which results in a huge amount of lighting computation.)

Deferred Shading does not suffer from the problems above.
So let's see what Deferred Shading is:
deferred shading is a screen-space shading technique. It is called deferred because no shading is actually performed in the first pass of the vertex and pixel shaders: instead shading is “deferred” until a second pass.
Image source below
DeferredShading

From the above we can see that Deferred Shading operates on screen space rather than on each object's vertices, and that it consists of two passes, with the actual shading happening only in the second pass.
Let's look at the two passes:

  1. Geometry Pass. Data that is required for shading computation is gathered. Positions, normals, and materials for each surface are rendered into the geometry buffer (G-buffer) using "render to texture" (Multiple Render Targets, MRT). (In the first pass, instead of feeding all the lighting-related data into the FS as Forward Rendering does, we store it in the geometry buffer (G-buffer) for the second pass to do the real shading. Everything stored in the G-buffer has already been through the rasterizer, so the G-buffer keeps only pixels that passed the depth test; when we later compute lighting from the G-buffer, we avoid the useless lighting work for pixels that failed the depth test.)
    Let's look at what the G-buffer stores. Image source below:
    G-Buffer
    As shown, we store all the data needed for the lighting computation.
    The Geometry Pass mainly produces four textures for Position, Diffuse, Normal and TexCoord, plus a Depth texture.
    The Geometry Pass consists of the following steps:
    1. Create m_FBO
bool GBuffer::Init(unsigned int windowwidth, unsigned int windowheight)
{
    //Create the FBO
    glGenFramebuffers(1, &m_FBO);
    glBindFramebuffer(GL_FRAMEBUFFER, m_FBO);

    ......
}
2. Create four textures and attach each one to one of m_FBO's GL_COLOR_ATTACHMENT* points; create a separate texture and attach it to m_FBO's GL_DEPTH_ATTACHMENT
bool GBuffer::Init(unsigned int windowwidth, unsigned int windowheight)
{
    //Create the FBO
    ......

    //Create the gbuffer textures
    glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_Textures), m_Textures);
    glGenTextures(1, &m_DepthTexture);

    for(unsigned int i = 0; i < ARRAY_SIZE_IN_ELEMENTS(m_Textures); i++)
    {
        glBindTexture(GL_TEXTURE_2D, m_Textures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, windowwidth, windowheight, 0, GL_RGB, GL_FLOAT, 0);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_Textures[i], 0);
    }

    //depth texture
    glBindTexture(GL_TEXTURE_2D, m_DepthTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, windowwidth, windowheight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_DepthTexture, 0);

    GLenum drawbuffers[] = { GL_COLOR_ATTACHMENT0,
                             GL_COLOR_ATTACHMENT1,
                             GL_COLOR_ATTACHMENT2,
                             GL_COLOR_ATTACHMENT3 };

    .......
}
3. Specify the color buffers into which the FS outputs Position, Diffuse, Normal and TexCoord
bool GBuffer::Init(unsigned int windowwidth, unsigned int windowheight)
{
    ......

    // Specify the color buffers to be drawn into; the FS writes one buffer's
    // data into each of them through its 'out' variables
    glDrawBuffers(ARRAY_SIZE_IN_ELEMENTS(drawbuffers), drawbuffers);

    ......

    return true;
}

geometry_pass.vs
// The VS is mostly unchanged; it just transforms the Position, TexCoord and Normal
// we want to store into clip space and world space respectively
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;

uniform mat4 gWVP;
uniform mat4 gWorld;

out vec2 TexCoord0;
out vec3 Normal0;
out vec3 WorldPos0;


void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    TexCoord0 = TexCoord;
    Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
    WorldPos0 = (gWorld * vec4(Position, 1.0)).xyz;
}

geometry_pass.fs
// The FS writes the transformed Position, Diffuse, Normal and TexCoord values
// to the GL_COLOR_ATTACHMENT* points we bound earlier
#version 330

in vec2 TexCoord0;
in vec3 Normal0;
in vec3 WorldPos0;

layout (location = 0) out vec3 WorldPosOut;
layout (location = 1) out vec3 DiffuseOut;
layout (location = 2) out vec3 NormalOut;
layout (location = 3) out vec3 TexCoordOut;

uniform sampler2D gColorMap;

void main()
{
    WorldPosOut = WorldPos0;
    DiffuseOut = texture(gColorMap, TexCoord0).xyz;
    NormalOut = normalize(Normal0);
    TexCoordOut = vec3(TexCoord0, 0.0);
}
4. Run the Geometry Pass to generate the corresponding textures
static void DSGeometryPass()
{
    gDSGeomPassTech.Enable();
    // Before writing into the color textures, we must make m_FBO (the FBO the
    // color textures are attached to) the draw framebuffer
    gGbuffer.BindForWriting();

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    Pipeline p;
    p.Scale(0.1f, 0.1f, 0.1f);
    p.Rotate(0.0f, gScale, 0.0f);
    p.WorldPos(-0.8f, -1.0f, 12.0f);
    p.SetCamera(pGameCamera->GetPos(), pGameCamera->GetTarget(), pGameCamera->GetUp());
    p.SetPerspectiveProj(gPersProjInfo);

    gDSGeomPassTech.SetWVP(p.GetWVPTrans());
    gDSGeomPassTech.SetWorldMatrix(p.GetWorldTrans());

    gMesh.Render();
}
5. Copy the four generated color textures into framebuffer 0 and show them on screen
static void DSLightPass()
{
    // glBlitFramebuffer() copies from the GL_READ_FRAMEBUFFER target to the
    // GL_DRAW_FRAMEBUFFER target, so we bind framebuffer 0 as the draw
    // (destination) framebuffer; the four color buffers are copied into it below
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // BindForReading() binds m_FBO as GL_READ_FRAMEBUFFER, i.e. as the copy
    // source for glBlitFramebuffer(); its four attached color buffers will be
    // copied to framebuffer 0
    gGbuffer.BindForReading();

    GLint halfwidth = (GLint)(WINDOW_WIDTH / 2.0f);
    GLint halfheight = (GLint)(WINDOW_HEIGHT / 2.0f);

    // Color buffer for position
    // We can only copy from one texture at a time, so set the read buffer
    // to select the copy source
    gGbuffer.SetReadBuffer(GBuffer::GBUFFER_TEXTURE_TYPE_POSITION);
    // Set buffer copy info
    glBlitFramebuffer(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT, 0, 0, halfwidth, halfheight, GL_COLOR_BUFFER_BIT, GL_LINEAR);

    // Color buffer for diffuse
    gGbuffer.SetReadBuffer(GBuffer::GBUFFER_TEXTURE_TYPE_DIFFUSE);
    glBlitFramebuffer(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT, 0, halfheight, halfwidth, WINDOW_HEIGHT, GL_COLOR_BUFFER_BIT, GL_LINEAR);

    // Color buffer for normal
    gGbuffer.SetReadBuffer(GBuffer::GBUFFER_TEXTURE_TYPE_NORMAL);
    glBlitFramebuffer(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT, halfwidth, halfheight, WINDOW_WIDTH, WINDOW_HEIGHT, GL_COLOR_BUFFER_BIT, GL_LINEAR);

    // Color buffer for texcoord
    gGbuffer.SetReadBuffer(GBuffer::GBUFFER_TEXTURE_TYPE_TEXCOORD);
    glBlitFramebuffer(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT, halfwidth, 0, WINDOW_WIDTH, halfheight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
}
	Final Effect:

DeferredShading_GeometryPass
A few points deserve attention in a real Deferred Shading implementation.
First, we don't actually need to display the four color textures on screen, so the last step above can be dropped.
Second, because the Geometry Pass must keep only the closest pixel, we enable GL_DEPTH_TEST and call glDepthMask(GL_TRUE) during this pass (and glDepthMask(GL_FALSE) afterwards) so that no other pass writes into our FBO's depth buffer.
The final geometry pass code:

void DSGeometryPass()
{
    m_DSGeomPassTech.Enable();

    m_gbuffer.BindForWriting();

    // Only the geometry pass updates the depth buffer
    glDepthMask(GL_TRUE);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnable(GL_DEPTH_TEST);

    glDisable(GL_BLEND);

    ......

    // When we get here the depth buffer is already populated and the stencil pass
    // depends on it, but it does not write to it.
    glDepthMask(GL_FALSE);

    glDisable(GL_DEPTH_TEST);
}
	Third, the texture coordinates needed in the Lighting Pass can be computed with the formula below, so to save memory we can skip generating the TexCoord texture and keep only Position, Diffuse, Normal and Depth (the depth texture will be needed later).
vec2 CalcTexCoord()
{
    return gl_FragCoord.xy / gScreenSize;
}

	Fourth, because the generated textures are sampled with a 1:1 mapping onto the screen, we must set their texture filters explicitly.
bool GBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight)
{
    ...
    for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
        ...
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        ...
    }
    ...
}
  1. Lighting Pass. A pixel shader computes the direct and indirect lighting at each pixel using the information of the texture buffers in screen space.
    In this second pass we simply perform per-pixel lighting with the data stored during the Geometry Pass. Since the stored textures are in screen space and every stored pixel has already passed the depth test, Deferred Shading computes lighting only for pixels that survived the depth test.
    Let's see how the stored Position, Diffuse, Normal and Depth data produces the final lit color, starting with the overall outline.
static void RenderCallbackCB()
{
    pGameCamera->OnRender();

    gScale += 0.05f;

    DSGeometryPass();

    BeginLightPasses();

    DSPointLightPass();

    DSDirectionalLightPass();

    //Swap buffer
    glutSwapBuffers();
}
1. Enable blending: Deferred Shading evaluates every relevant light for each pixel, and the final result is the sum of all the lights' contributions. (We no longer read through our generated FBO, since we sample the generated textures directly, so instead of binding that FBO we bind the default framebuffer and simply bind the corresponding textures.)
void GBuffer::BindForReading()
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

    for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures); i++) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, m_textures[GBUFFER_TEXTURE_TYPE_POSITION + i]);
    }
}

void BeginLightPasses()
{
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE);

    m_gbuffer.BindForReading();
    glClear(GL_COLOR_BUFFER_BIT);
}
2. For each pixel, compute the contribution of every Point, Directional and Spot light in the scene and accumulate the final color. Each light type needs its own way of triggering the per-pixel computation.

A directional light is global, so every pixel must be lit; we trigger this by rendering a quad mesh that covers the whole screen.
Directional Light:

light_pass.vs
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
}

dir_light_pass.fs
// Mostly the same as before; the only difference is that the lighting inputs
// are read from the corresponding G-buffer textures
......

vec2 CalcTexCoord()
{
    return gl_FragCoord.xy / gScreenSize;
}

out vec4 FragColor;

void main()
{
    vec2 TexCoord = CalcTexCoord();
    vec3 WorldPos = texture(gPositionMap, TexCoord).xyz;
    vec3 Color = texture(gColorMap, TexCoord).xyz;
    vec3 Normal = texture(gNormalMap, TexCoord).xyz;
    Normal = normalize(Normal);

    FragColor = vec4(Color, 1.0) * CalcDirectionalLight(WorldPos, Normal);
}

void DSDirectionalLightPass()
{
    m_DSDirLightPassTech.Enable();
    m_DSDirLightPassTech.SetEyeWorldPos(m_pGameCamera->GetPos());
    Matrix4f WVP;
    // The quad spans (-1,-1) to (1,1); with an identity WVP the rasterizer maps it
    // to (0,0)-(SCREEN_WIDTH,SCREEN_HEIGHT), i.e. it covers the whole screen.
    // Everything else matches the earlier directional-light code, except that
    // each pixel is lit once per directional light
    WVP.InitIdentity();
    m_DSDirLightPassTech.SetWVP(WVP);
    m_quad.Render();
}

A point light only affects a limited range, so we need to know its area of influence in order to trigger the lighting computation for the right pixels. This comes from the point light attenuation equation; I haven't studied its derivation in detail, so look it up if you are interested. The equation gives the light's effective radius: drawing a sphere of that radius centered at the light's position triggers the point-light computation for exactly the affected pixels.
Point Light:

// The shaders are unchanged apart from reading their inputs from the textures,
// so they are not repeated here.
......

// Attenuation-based bounding-sphere radius
float CalcPointLightBSphere(const PointLight& Light)
{
    float MaxChannel = fmax(fmax(Light.Color.x, Light.Color.y), Light.Color.z);

    float ret = (-Light.Attenuation.Linear + sqrtf(Light.Attenuation.Linear * Light.Attenuation.Linear -
                 4 * Light.Attenuation.Exp * (Light.Attenuation.Exp - 256 * MaxChannel * Light.DiffuseIntensity)))
                /
                (2 * Light.Attenuation.Exp);
    return ret;
}

void DSPointLightsPass()
{
    m_DSPointLightPassTech.Enable();
    m_DSPointLightPassTech.SetEyeWorldPos(m_pGameCamera->GetPos());

    Pipeline p;
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    p.SetPerspectiveProj(m_persProjInfo);

    // Again, one lighting computation is triggered per point light
    for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_pointLight); i++) {
        m_DSPointLightPassTech.SetPointLight(m_pointLight[i]);
        p.WorldPos(m_pointLight[i].Position);
        // Render a sphere of the computed radius to trigger the lighting computation
        float BSphereScale = CalcPointLightBSphere(m_pointLight[i]);
        p.Scale(BSphereScale, BSphereScale, BSphereScale);
        m_DSPointLightPassTech.SetWVP(p.GetWVPTrans());
        m_bsphere.Render();
    }
}

I ran into a problem during implementation, so I did not implement the spot light. I believe it would be modelled with a cone for the spot light's range, with the effective range again derived from the attenuation equation.

Here is the problem I hit (still unsolved); any ideas are welcome.
After following the official tutorial's source code, my version only shows the box when the camera gets very close. I first suspected the zFar in PersProjInfo, but it is identical to the source's; setting the source's zFar to 2.0 reproduced the same symptom there, yet no matter how I change zFar in my code, my example still shows only part of the box, and only up close.
DS initial screenshot
DS after moving close

// Source Code
m_persProjInfo.FOV = 60.0f;
m_persProjInfo.Height = WINDOW_HEIGHT;
m_persProjInfo.Width = WINDOW_WIDTH;
m_persProjInfo.zNear = 1.0f;
m_persProjInfo.zFar = 100.0f;

// My Code
gPersProjInfo.FOV = 60.0f;
gPersProjInfo.Height = WINDOW_HEIGHT;
gPersProjInfo.Width = WINDOW_WIDTH;
gPersProjInfo.zNear = 1.0f;
gPersProjInfo.zFar = 100.0f;

With the steps above we get the Deferred Shading result:
DS source-code result

But this approach still has problems:

  1. When the camera moves close to a point light, the light's contribution disappears. (We render only front faces, so once the camera enters the light sphere the sphere is culled away and the point-light computation is never triggered.)

  2. The sphere is tested against our screen-space textures, so an object that is outside the sphere in 3D but covered by the sphere's screen-space footprint wrongly receives the point light.
    Solving the second problem requires the stencil buffer.
    First, what is a stencil buffer?
    "A stencil buffer is an extra buffer, in addition to the color buffer and depth buffer (z-buffering) found on modern graphics hardware. The buffer is per pixel, and works on integer values, usually with a depth of one byte per pixel."
    Put simply, think of the stencil buffer as a Photoshop-style mask: for example, a pixel passes only where the (modifiable) stencil value is non-zero. It is exactly this property that lets us control which pixels participate in the point-light computation.

The stencil buffer drives the stencil test, which is evaluated per pixel, like the Photoshop-style mask described above.

A simple stencil buffer effect; image source below:
StencilBufferEffect

We can specify how and when the values in the stencil buffer are modified.

Now let's see how the stencil buffer is used to solve the second problem.
The following is quoted from the tutorial:

  1. Render the objects as usual into the G buffer so that the depth buffer will be properly populated.
  2. Disable writing into the depth buffer. From now on we want it to be read-only
  3. Disable back face culling. We want the rasterizer to process all polygons of the sphere.
  4. Set the stencil test to always succeed. What we really care about is the stencil operation.
  5. Configure the stencil operation for the back facing polygons to increment the value in the stencil buffer when the depth test fails but to keep it unchanged when either depth test or stencil test succeed.
  6. Configure the stencil operation for the front facing polygons to decrement the value in the stencil buffer when the depth test fails but to keep it unchanged when either depth test or stencil test succeed.
  7. Render the light sphere.(only when the stencil value of the pixel is different from zero)

The key idea is to update the stencil value according to whether the sphere's front and back faces lie in front of or behind the scene geometry, and then use that value to decide which pixels need shading.
See the figure below:
DeferredShadingStencilBufferUsing
Both the sphere's front and back faces are behind object A and in front of object C; for object B, the front face is in front of it and the back face behind it.

So under the rules set in steps 5 and 6, only the pixels covered by object B end up with a non-zero stencil value.

Steps 2-6 above form the Stencil Pass, which determines which pixels participate in the point light sphere's computation.

Step 7 is the actual lighting computation.

Now let's look at the implementation:

  1. For each point light, enable the stencil test and run the stencil pass before the lighting computation to determine which pixels should participate
null_technique.vs
#version 330

layout (location = 0) in vec3 Position;

uniform mat4 gWVP;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
}

null_technique.fs
// Empty: the stencil pass does not write the color buffer, we only need to drive the rasterizer

// The lit result ends up in the G buffer's GL_COLOR_ATTACHMENT4,
// so we clear GL_COLOR_ATTACHMENT4 at the start of every frame
void GBuffer::StartFrame()
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
    glDrawBuffer(GL_COLOR_ATTACHMENT4);
    glClear(GL_COLOR_BUFFER_BIT);
}



// When initializing the G buffer, note that we now need to store stencil values:
// the depth texture format becomes GL_DEPTH32F_STENCIL8 and it is attached to
// GL_DEPTH_STENCIL_ATTACHMENT
bool GBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight)
{
    ...

    // depth
    glBindTexture(GL_TEXTURE_2D, m_depthTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, WindowWidth, WindowHeight, 0, GL_DEPTH_STENCIL,
                 GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);

    ...
}

void GBuffer::BindForStencilPass()
{
    // must disable the draw buffers
    glDrawBuffer(GL_NONE);
}

void DSStencilPass(unsigned int PointLightIndex)
{
    m_nullTech.Enable();

    // Disable color/depth write and enable stencil.
    // Reuse the FBO we created earlier (its depth buffer was filled when the
    // objects were rendered in the geometry pass)
    m_gbuffer.BindForStencilPass();
    // Stencil updates depend on the depth test result, so GL_DEPTH_TEST must be enabled
    glEnable(GL_DEPTH_TEST);

    // To get correct stencil values we must rasterize both the sphere's
    // front and back faces
    glDisable(GL_CULL_FACE);

    // Reset the stencil buffer before processing this point light
    glClear(GL_STENCIL_BUFFER_BIT);

    // We need the stencil test to be enabled but we want it
    // to succeed always. Only the depth test matters:
    // the stencil update rules alone produce the values we need
    glStencilFunc(GL_ALWAYS, 0, 0);

    // Set the stencil update rules, i.e. steps 5 and 6 above
    glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);

    Pipeline p;
    p.WorldPos(m_pointLight[PointLightIndex].Position);
    float BBoxScale = CalcPointLightBSphere(m_pointLight[PointLightIndex]);
    p.Scale(BBoxScale, BBoxScale, BBoxScale);
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    p.SetPerspectiveProj(m_persProjInfo);

    m_nullTech.SetWVP(p.GetWVPTrans());
    // Rendering the sphere now fills the stencil buffer for this light
    m_bsphere.Render();
}

virtual void RenderSceneCB()
{
    ......

    // Clear the G buffer's GL_COLOR_ATTACHMENT4
    m_gbuffer.StartFrame();

    // We need stencil to be enabled in the stencil pass to get the stencil buffer
    // updated and we also need it in the light pass because we render the light
    // only if the stencil passes.
    glEnable(GL_STENCIL_TEST);

    for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_pointLight); i++) {
        DSStencilPass(i);
        DSPointLightPass(i);
    }

    // The directional light does not need a stencil test because its volume
    // is unlimited and the final pass simply copies the texture.
    glDisable(GL_STENCIL_TEST);

    DSDirectionalLightPass();

    DSFinalPass();

    RenderFPS();

    glutSwapBuffers();
}
  1. Run the Point Light Pass, using the stencil buffer to light only the selected pixels
void GBuffer::BindForLightPass()
{
    glDrawBuffer(GL_COLOR_ATTACHMENT4);

    for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures); i++) {
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GL_TEXTURE_2D, m_textures[GBUFFER_TEXTURE_TYPE_POSITION + i]);
    }
}

void DSPointLightPass(unsigned int PointLightIndex)
{
    // Bind the textures the lighting computation reads from
    m_gbuffer.BindForLightPass();

    m_DSPointLightPassTech.Enable();
    m_DSPointLightPassTech.SetEyeWorldPos(m_pGameCamera->GetPos());

    // Pass only where the stencil value is non-zero, so only pixels that
    // survived the stencil pass (i.e. inside the light sphere) are lit
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);

    // Lighting needs no depth test; the light contributions are simply
    // blended additively
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);
    glBlendFunc(GL_ONE, GL_ONE);

    // Important: cull front faces while computing the lighting, so the sphere's
    // back faces are still rasterized even when the camera is inside the sphere
    // (with GL_BACK they would be culled away).
    // This fixes the earlier problem of the lighting disappearing when the
    // camera enters the sphere
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);

    Pipeline p;
    p.WorldPos(m_pointLight[PointLightIndex].Position);
    float BBoxScale = CalcPointLightBSphere(m_pointLight[PointLightIndex]);
    p.Scale(BBoxScale, BBoxScale, BBoxScale);
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    p.SetPerspectiveProj(m_persProjInfo);
    m_DSPointLightPassTech.SetWVP(p.GetWVPTrans());
    m_DSPointLightPassTech.SetPointLight(m_pointLight[PointLightIndex]);
    m_bsphere.Render();
    glCullFace(GL_BACK);

    glDisable(GL_BLEND);
}
  1. Finally, present the image accumulated in the G buffer (we drew it into the G buffer's GL_COLOR_ATTACHMENT4)
void GBuffer::BindForFinalPass()
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT4);
}

void DSFinalPass()
{
    // We don't render straight into the default FBO because the point light pass
    // needs our FBO's depth buffer to decide which pixels take part in the lighting
    m_gbuffer.BindForFinalPass();
    glBlitFramebuffer(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT,
                      0, 0, WINDOW_WIDTH, WINDOW_HEIGHT, GL_COLOR_BUFFER_BIT, GL_LINEAR);
}

Final Effect:
DeferredShadingFinalEffect

Note:
The key point behind deferred shading is the decoupling of the geometry calculations (position and normal transformations) and the lighting calculations.

OpenGL Utility

Open Asset Import Library

“Open Asset Import Library is a portable Open Source library to import various well-known 3D model formats in a uniform manner”

官方网站

assimp

“assimp is a library to load and process geometric scenes from various data formats. It is tailored at typical game scenarios by supporting a node hierarchy, static or skinned meshes, materials, bone animations and potential texture data. The library is not designed for speed, it is primarily useful for importing assets from various sources once and storing it in a engine-specific format for easy and fast every-day-loading. “

官方文档

Note:
My understanding: assimp parses model data in many formats and abstracts it all into the aiScene class.
Through aiScene we can access the model's vertex data, texture data, material data and so on.
With that data we build vertex buffers and textures and finally draw the model.

Source code reference: OpenGL Tutorial 22

open3mod

“open3mod is a Windows-based model viewer. It loads all file formats that Assimp supports and is perfectly suited to quickly inspect 3d assets.”

Mainly used for quickly inspecting models in all the formats Assimp supports.
Open3modUsing

GIMP

GNU Image Manipulation Program (GIMP) is a cross-platform image editor available for GNU/Linux, OS X, Windows and more operating systems.
GIMPDownloadLink

gimp-normalmap plugin that supports to export normal map from texture
gimp-normalmapDownloadLink

With GIMP and the gimp-normalmap plugin, we can export a normal map from a texture.

GLSL Debugger

Nsight

NVIDIA® Nsight™ is the ultimate development platform for heterogeneous computing. Work with powerful debugging and profiling tools that enable you to fully optimize the performance of the CPU and GPU. - See more at: http://www.nvidia.com/object/nsight.html#sthash.Hc8TfPMs.dpuf
Nsight is a suite of tools from NVIDIA that assists GPU development.
Pros:

  1. Debugs shading languages such as GLSL and HLSL directly
  2. Integrates tightly with Visual Studio

Cons:

  1. Fairly heavy hardware restrictions (it mainly targets NVIDIA GPUs)
    Nsight Visual Studio Edition Requirements

OpenGL profiler, debugger

gDEBugger

gDEBugger is a debugger, profiler and memory analyzer for OpenGL and OpenCL development.
With gDEBugger we can inspect a wealth of OpenGL-related information for a given frame (uniform values, OpenGL state, draw call counts, etc.),
including OpenGL state such as GL_CULL_FACE:
gDEBuggerCapture

It can also show shader information, compile shaders, and more:
gDEBuggerShaderInfo

Reference Website:

OpenGL 4 reference page
client-server model
OpenGL Execute Model
X Window System
OpenGL Tutorial
OpenGL Windows & Context
Creating an OpenGL Context (WGL)
OpenGL Context
Window and OpenGL context
OpenGLBook.com Getting Started

Note:
OpenGL uses a right-handed coordinate system

.Net Framework

Introduction

The .NET Framework is a revolutionary platform created by Microsoft for developing applications.

Content

The .NET Framework consists primarily of a very large code library that client languages consume through object-oriented programming techniques. The library is divided into modules.

The .NET Framework also includes the Common Language Runtime (CLR), which manages the execution of all applications developed with the .NET library.

Using .NET Framework

Tools

  1. Visual Studio
  2. VCE(for C#)

Related concepts

FCL(Framework Class Library)

“The FCL is a set of DLL assemblies that contain several thousand type definitions in which each type exposes some functionality.” (A large set of ready-made DLL assemblies providing extensive functionality.)

Metadata

“There are two main types of tables: tables that describe the types and members defined in your source code and tables that describe the types and members referenced by your source code.” (Metadata records the definitions of the types in your source code, references to other types and members, and so on.)

IL(Intermediate Language)

“IL is a CPU-independent machine language created by Microsoft after consultation with several external commercial and academic language/compiler writers.” (A CPU-independent intermediate language that abstracts the compiled high-level languages; the JIT later compiles it into machine code for the specific OS and target architecture.)

CLI(Common Language Infrastructure)

The Common Language Infrastructure (CLI) is an open specification developed by Microsoft and standardized by ISO[1] and ECMA[2] that describes executable code and a runtime environment that allow multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. (The CLI specifies executable code and its runtime environment so that software written in high-level languages can run on different computer architectures without being rewritten.)

Note:
The .NET Framework and the free and open source Mono and Portable.NET are implementations of the CLI.

CLR(Common Language Runtime)

“The common language runtime (CLR) is just what its name says it is: a runtime that is usable by different and varied programming languages. The core features of the CLR (such as memory management, assembly loading, security, exception handling, and thread synchronization) are available to any and all programming languages that target it” – 《CLR Via C# Fourth Edition - Jeffrey Richter》 (The Common Language Runtime provides memory management, exception handling, thread synchronization and more.)

CTS(Common Type System)

“Describes how types are defined and how they behave. Defines the rules governing type inheritance, virtual methods, object lifetime, and so on.”

CLS(Common Language Specification)

“Details for compiler vendors the minimum set of features their compilers must support if these compilers are to generate types compatible with other components written by other CLS-compliant languages on top of the CLR.” (The Common Language Specification defines the minimum rules governing type interaction between CLS-compliant languages.)
CTSAndCLS

Compile process

  1. CIL (Common Intermediate Language)
    Code is first compiled into Common Intermediate Language (CIL),
    and the CIL is then packaged into assemblies

Compile Source Into Managed Modules
As shown above, every language the CLR supports is compiled into a managed module (IL and metadata)

Managed Module

A managed module consists of two parts:

  1. Metadata
  2. IL (Intermediate Language)
    ManagedModulesComponents
    Metadata records the type information.
    IL is the CPU-independent intermediate language produced by compilation.
    Every CLR-supported language is compiled into IL and metadata, stored in a managed module.
    But what our programs ultimately load are assemblies, not modules. Here is the relationship between modules and assemblies:
    RelationshipBetweenModulesAndAssemblies
    An assembly is composed of one or more modules,
    and the CLR manages the execution of the code inside assemblies.
    That is why the various CLR-supported languages can call into one another under the CLR.

Executing Assembly Code

  1. JIT (Just-In-Time)
    compiles CIL into machine code for the specific OS and target architecture,
    i.e. into native code
    e.g.
    CodeExecutionExample1
    CodeExecutionExample2
    As shown above, the IL generated earlier is compiled by the JIT into the corresponding native machine code at run time. When the same method is called again, the IL-to-native compilation is skipped and the previously compiled machine code is invoked directly.

Note the concept of unsafe code here.
Unsafe code is allowed to work directly with memory addresses and can manipulate bytes at these addresses.
The /unsafe compiler switch controls whether unsafe code (e.g. code that modifies memory through raw addresses) is allowed to compile and execute.
PEVerify.exe can be used to check whether an assembly contains unsafe code.

Assemblies

The CIL created when a program is compiled is stored in an assembly (e.g. .exe, .dll).
Assemblies contain all of their modules' metadata and IL.

Note:
PDB(program Database) file helps the debugger find local variables and map the IL instructions to the source code.
NGen.exe tool can compiles all of an assembly’s IL code into native code and saves the resulting native code to a file on disk.(avoid compilation at run time)

Managed code

The CLR manages applications by managing memory, handling security, enabling cross-language debugging, and so on. By contrast, applications that run outside the CLR's control are unmanaged.
Managed execution under the CLR

Note:
“C++ is unique in that it is the only compiler that allows the developer to write both managed and unmanaged code and have it emitted into a single module.”

Garbage collection

Garbage collection (GC) is one of the services provided to managed code.

Linking

Modularity

CSharp(CLR Via C#)

Introduction

C# is one of the languages that can be used to create applications that run on the .NET CLR. It evolved from C and C++ and was created by Microsoft specifically for the .NET platform.

Features

  1. Simple syntax
  2. Type safety
  3. A language designed for the .NET Framework

Development

Application Type

  1. Windows application programs
  2. Web Application Program
  3. Web Service

Language Study

These notes only record some of the differences between C# and C++/Java.

delegate

delegates – type-safe
Unmanaged C/C++ callback functions are not type-safe
The first thing to know is that delegates are type-safe in C# (they get compile-time type checking),
whereas function pointers in C++ are not.

When binding a method to a delegate, the CLR supports covariance and contra-variance for reference types.
Covariance means that a method can return a type that is derived from the delegate's return type.
Contra-variance means that a method can take a parameter that is a base of the delegate's parameter type.
The reason why value types and void cannot be used for covariance and contra-variance is because the memory structure for these things varies, whereas the memory structure for reference types is always a pointer.

Now let's look at the story behind delegates:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;
using System.Reflection;

namespace CSharpDeepStudy
{
    #region Delegate Study
    internal delegate void DelegateStudy(int value);
    #endregion

    class Program
    {
        #region Delegate Study
        private static void StaticDelegateDemo(int value)
        {
            Console.WriteLine("StaticDelegateDemo({0})", value);
        }

        private void InstanceDelegateDemo(int value)
        {
            Console.WriteLine("InstanceDelegateDemo({0})", value);
        }

        private static void ChainDelegateDemo(Program p)
        {
            DelegateStudy cdd = null;
            cdd += Program.StaticDelegateDemo;
            cdd += p.InstanceDelegateDemo;
            cdd.Invoke(3);
        }
        #endregion

        static void Main(string[] args)
        {
            #region Delegate Study
            Program p = new Program();
            DelegateStudy sdd = Program.StaticDelegateDemo;
            DelegateStudy idd = p.InstanceDelegateDemo;
            sdd.Invoke(1);
            idd.Invoke(2);
            Program.ChainDelegateDemo(p);
            #endregion

            #region Dynamic Study
            //Dynamic delegate part
            //BindingFlags are needed because InstanceDelegateDemo is a private instance method
            MethodInfo mi = typeof(Program).GetMethod("InstanceDelegateDemo",
                BindingFlags.NonPublic | BindingFlags.Instance);
            Delegate d = Delegate.CreateDelegate(typeof(DelegateStudy), p, mi);
            d.DynamicInvoke(4);
            #endregion

            Console.ReadKey();
        }
    }
}

After decompiling:
DelegateStudy
As shown above, when we define a delegate, the compiler generates a class derived from System.MulticastDelegate for us; here it is the DelegateStudy class.
MulticastDelegate is the key to chaining delegates together.
MulticastDelegate
As shown above, MulticastDelegate contains three key members:

  1. _target
    holds the delegate's target instance; it is null for a static callback
  2. _methodPtr
    identifies the callback method
  3. _invocationList
    the key to delegate chains; it holds an array of delegate objects
    Let's see how _invocationList implements the delegate chain:
    MultipleDelegatePart1
    MultipleDelegatePart2
    MultipleDelegatePart3
    MultipleDelegatePart4
    As shown, chaining delegates stores the delegates' references in _invocationList, which is what produces the multicast delegate chain behaviour.
    The last point is creating delegates dynamically through reflection:
    with Delegate.CreateDelegate() we can create a delegate at run time.
    The code above produces the following output:
    DelegateStudyOutput

(C++ implements callbacks with function pointers; Java uses interfaces and inner-class closures.)
Finally, delegates can also be defined with lambda expressions and anonymous methods:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CSharpStudy
{
    class Program
    {
        delegate void testDelegate(string para);

        static void doSomething(string para)
        {
            Console.WriteLine("doSomething:" + para);
        }

        static void Main(string[] args)
        {
            #region method 1 to use delegate
            testDelegate delegate1;
            delegate1 = new testDelegate(doSomething);
            #endregion
            delegate1("delegate1");

            #region method 2 to use delegate(lambda expression)
            testDelegate delegate2 = s =>
            {
                Console.WriteLine("doSomething:" + s);
            };
            #endregion
            delegate2("delegate2");

            #region method 3 to use delegate(anonymous method)
            testDelegate delegate3 = delegate(string para)
            {
                Console.WriteLine("delegate3's para = " + para);
            };
            #endregion
            delegate3("delegate3");
            Console.ReadKey();
        }
    }
}

Output:
CSharp_Delegate

For a detailed comparison of delegates and function pointers, see C# VS C++之一: 委托 vs 函数指针.

Only the final summary is repeated here:
1. A C# delegate is a real object; a C/C++ function pointer is just a function entry address.
2. The C++ counterpart of a delegate object: the functor.
3. C++ static polymorphism: templates.

Class & interface

  1. Class qualifiers
    internal class – only code in the current project can access it (default)
    public class – code in other projects can access it as well

abstract class – serves only as a base class; it cannot be instantiated directly
sealed class – cannot be inherited from

Note:
The compiler does not allow a derived class to have broader accessibility than its parent class.

e.g.

internal class MyBase
{
public MyBase()
{

}
}
//Cannot inherit here: the child class's accessibility (public) would be broader than the parent's (internal)
public class MyChild /*: MyBase*/
{
public MyChild()
{


}
}

A class can implement multiple interfaces.
Note:
When inheriting from a class and implementing several interfaces, the base class must be written first in the list.
abstract & sealed cannot be applied to an interface, since interfaces carry no implementation (both qualifiers would be meaningless).
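The base-class-first rule can be sketched as follows (the IWalk/IRun/Animal/Dog names are hypothetical):

```csharp
using System;

interface IWalk { void Walk(); }
interface IRun { void Run(); }

class Animal
{
    public string Name = "animal";
}

// The base class must appear first, followed by any number of interfaces.
class Dog : Animal, IWalk, IRun
{
    public void Walk() { Console.WriteLine("walk"); }
    public void Run() { Console.WriteLine("run"); }
}
```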

  1. Static constructor & static class
    A static constructor is called only once and belongs to the class as a whole.
    A static class cannot have an instance constructor and may only contain static members.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CSharpStudy
{
class Program
{
class StaticConstructor
{
static StaticConstructor()
{
Console.WriteLine("Static StaticConstructor()");
m_ID = 1;
}

public StaticConstructor()
{
Console.WriteLine("Normal StaticConstructor()");
}

public static int m_ID;
}

static class StaticClass
{
/*
//Cannot have an instance constructor (a static constructor is actually allowed)
static StaticClass()
{
Console.WriteLine("StaticClass()");
m_ID = 2;
m_Type = "Static";
}
*/
//Cannot have non-static members
//public int m_Test = 3;
public static int m_ID = 2;

public static string m_Type = "Static";
}


static void Main(string[] args)
{
StaticConstructor sc = new StaticConstructor();
Console.WriteLine("StaticConstructor::m_ID = " + StaticConstructor.m_ID);

Console.WriteLine("StaticClass::m_ID = " + StaticClass.m_ID);
Console.WriteLine("StaticClass::m_Type = " + StaticClass.m_Type);

Console.ReadKey();
}
}
}

Output:
Static_Constructor_And_Static_Class

Multiple inheritance is not supported

C++ supports multiple inheritance; Java and C# achieve a similar effect by implementing multiple interfaces.

class Base1
{
public Base1()
{
Console.WriteLine("Base1()");
}
}
class Base2
{
public Base2()
{
Console.WriteLine("Base2()");
}
}
class Child : Base1/*, Base2*///Multiple class inheritance is not supported
{
public Child()
{
Console.WriteLine("Child()");
}
}

Child c = new Child();

Class member

access qualifiers
public, private, internal, protected

readonly – can only be initialized at its declaration or in a constructor
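The readonly rule can be sketched in a few lines (the Config name is hypothetical):

```csharp
using System;

class Config
{
    public readonly int MaxSize = 10;   // initialized at its declaration
    public readonly string Name;

    public Config(string name)
    {
        Name = name;                    // or initialized in a constructor
    }

    // Any later assignment, e.g. "MaxSize = 20;" inside a method,
    // would be a compile-time error.
}
```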

property & field
a property gives finer-grained control over field access

accessor accessibility – cannot be broader than the accessibility of the property it belongs to

class Accessor
{
public Accessor()
{
m_A = 0;
}

private int IntA
{
get
{
return m_A;
}
//The setter cannot be public because the property IntA itself is private
//public set
set
{
m_A = value;
}
}
private int m_A;
}
  1. member functions
    the override keyword hides the base function (and works with polymorphism)
    use the base keyword in the child class to access a base function that has been hidden

  2. Interface members
    all interface members must be public
    static, virtual, abstract, and sealed cannot be used

  3. Interface implementation
    explicit implementation – accessible only through the interface (ReturnType Interface.FunctionName(args))
    implicit implementation – accessible through both the interface and the class

  4. partial class definitions & partial methods
    class members, properties, methods, and fields can be split across several files (partial keyword)
    a partial method is implicitly private and has no return value (void)
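The rules in point 1 (overriding and the base keyword) can be sketched as follows; the BaseType/DerivedType names are hypothetical:

```csharp
using System;

class BaseType
{
    public virtual string Describe() { return "BaseType"; }
}

class DerivedType : BaseType
{
    // override participates in polymorphism
    public override string Describe()
    {
        // base gives access to the parent implementation that override replaced
        return "DerivedType over " + base.Describe();
    }
}
```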

Struct & Class

A struct is a value type
memory is allocated on the stack
stack reclamation is fast
passed by value
converting one to a reference type triggers boxing, causing an extra heap allocation
A class is a reference type
memory is allocated on the heap
managed by the GC
passed by reference
So when should we define a struct and when a class?
The following is from MSDN's Choosing Between Class and Struct:
✓ CONSIDER defining a struct instead of a class if instances of the type are small and commonly short-lived or are commonly embedded in other objects.
(Consider a struct when instances are small, short-lived, or embedded in other objects.)

X AVOID defining a struct unless the type has all of the following characteristics:
It logically represents a single value, similar to primitive types (int, double, etc.).
It has an instance size under 16 bytes.
It is immutable.
It will not have to be boxed frequently.
In all other cases, you should define your types as classes.
In short, define a struct only when the data is simple, immutable, and not frequently boxed.
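The boxing cost mentioned above can be observed directly: assigning a struct to object allocates a boxed copy on the heap, and later changes to the original never affect the box (the Point/BoxingDemo names are hypothetical):

```csharp
using System;

struct Point
{
    public int X;
}

class BoxingDemo
{
    public static bool BoxIsIndependentCopy()
    {
        Point p = new Point { X = 1 };
        object boxed = p;              // boxing: heap allocation that copies the value
        p.X = 2;                       // mutating the original does not touch the box
        return ((Point)boxed).X == 1;  // unboxing reads the copied value
    }
}
```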

Shallow copy & Deep copy
A shallow copy copies only value-type members; reference-type members still refer to the originals (System.Object.MemberwiseClone()).

A deep copy copies every member's value instead of its reference (implement ICloneable.Clone()).
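The two copy semantics can be sketched side by side (the Item/Tag names are hypothetical):

```csharp
using System;

class Tag { public string Name; }

class Item : ICloneable
{
    public int Id;          // value-type member: always copied
    public Tag Tag;         // reference-type member: shared by a shallow copy

    public Item ShallowCopy()
    {
        // MemberwiseClone copies fields bitwise; Tag still points at the original
        return (Item)MemberwiseClone();
    }

    public object Clone()   // deep copy: duplicate referenced objects too
    {
        return new Item { Id = Id, Tag = new Tag { Name = Tag.Name } };
    }
}
```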

Collection classes (System.Collections)

Comparable to the STL containers in C++.
Custom collections (extend the CollectionBase or DictionaryBase class)

Because C# has no built-in PriorityQueue, one has to be built from basic data structures such as List. Below is my own PriorityQueue, which simulates a priority queue by keeping a List heap-ordered.

public class PriorityQueue<T1, T2>
{
public PriorityQueue()
{
mHeap = new Heap<T1, T2>();
}

public PriorityQueue(Heap<T1, T2> heap)
{
mHeap = heap;
}

public bool Empty()
{
return (mHeap.Size() == 0);
}

public void Push(KeyValuePair<T1, T2> kvp)
{
mHeap.Insert(kvp);
}

public KeyValuePair<T1, T2> Pop()
{
KeyValuePair<T1, T2> result = mHeap.Top();
mHeap.RemoveTop();
return result;
}

public int Size()
{
return mHeap.Size();
}

public KeyValuePair<T1, T2> Top()
{
return mHeap.Top();
}

public void PrintOutAllMember()
{
mHeap.PrintOutAllMember();
}

private Heap<T1, T2> mHeap;
}

public class Heap<T1, T2>
{
private List<KeyValuePair<T1, T2>> mList;
private IComparer<T2> mComparer;
private int mCount;

public Heap()
{
mList = new List<KeyValuePair<T1, T2>>();
mComparer = Comparer<T2>.Default;
mCount = 0;
}

public Heap(List<KeyValuePair<T1, T2>> list)
{
mList = list;
mCount = list.Count;
mComparer = Comparer<T2>.Default;
BuildingHeap();
}

public int Size()
{
if (mList != null)
{
return mCount;
}
else
{
return 0;
}
}

//O(Log(N))
public void RemoveTop()
{
if (mList != null)
{
mList[0] = mList[mCount - 1];
mList.RemoveAt(mCount - 1);
mCount--;
HeapifyFromBeginningToEnd(0, mCount - 1);
}
}

public KeyValuePair<T1, T2> Top()
{
if (mList != null)
{
return mList[0];
}
else
{
//No more member
throw new InvalidOperationException("Empty heap.");
}
}

public void PrintOutAllMember()
{
foreach (KeyValuePair<T1, T2> valuepair in mList)
{
Console.WriteLine(valuepair.ToString());
}
}

//O(Log(N))
public void Insert(KeyValuePair<T1, T2> valuepair)
{
mList.Add(valuepair);
mCount++;
HeapifyFromEndToBeginning(mCount - 1);
}

//Sift the parent down to restore the heap property (a min-heap with this comparer); O(log(n)), bounded by the heap's depth
private void HeapifyFromBeginningToEnd(int parentindex, int length)
{
int max_index = parentindex;
int left_child_index = parentindex * 2 + 1;
int right_child_index = parentindex * 2 + 2;

//Choose the smallest among the parent and its left/right children
if (left_child_index < length && mComparer.Compare(mList[left_child_index].Value, mList[max_index].Value) < 0)
{
max_index = left_child_index;
}

if (right_child_index < length && mComparer.Compare(mList[right_child_index].Value, mList[max_index].Value) < 0)
{
max_index = right_child_index;
}

//If either child is smaller than the parent,
//swap them and adjust the child's subtree again so it still satisfies the heap property
if (max_index != parentindex)
{
Swap(max_index, parentindex);
HeapifyFromBeginningToEnd(max_index, length);
}
}

//O(log(N))
private void HeapifyFromEndToBeginning(int index)
{
if (index >= mCount)
{
return;
}
while (index > 0)
{
int parentindex = (index - 1) / 2;
if (mComparer.Compare(mList[parentindex].Value, mList[index].Value) > 0)
{
Swap(parentindex, index);
index = parentindex;
}
else
{
break;
}
}
}

//Build the heap from the initial data (bottom-up)
//O(N) overall; each individual sift-down is O(log(N))
private void BuildingHeap()
{
if (mList != null)
{
for (int i = mList.Count / 2 - 1; i >= 0; i--)
{
//1.2 Adjust heap
//Sift down so the subtree rooted at i satisfies the heap property
//Heap property (min-heap with the default comparer here):
// (k(i) <= k(2i) && k(i) <= k(2i+1)) (1 <= i <= n/2)
HeapifyFromBeginningToEnd(i, mList.Count);
}
}
}

//O(N*log(N))
private void HeapSort()
{
if (mList != null)
{
//Steps:
// 1. Build heap
// 1.1 Init heap
// 1.2 Adjust heap
// 2. Sort heap

//1. Build heap
// 1.1 Init heap
//(with the default comparer this builds a min-heap)
BuildingHeap();
//2. Sort heap
//This loop is O(n), proportional to the number of elements
for (int i = mList.Count - 1; i > 0; i--)
{
//swap first element and last element
//do adjust heap process again to make sure the new array are still max heap
Swap(i, 0);
//Due to we already building max heap before,
//so we just need to adjust for index 0 after we swap first and last element
HeapifyFromBeginningToEnd(0, i);
}
}
else
{
Console.Write("mList == null");
}
}

private void Swap(int id1, int id2)
{
KeyValuePair<T1, T2> temp;
temp = mList[id1];
mList[id1] = mList[id2];
mList[id2] = temp;
}
}

static void Main(string[] args)
{
List<KeyValuePair<int, float>> list = new List<KeyValuePair<int, float>>();
list.Add(new KeyValuePair<int, float>(3, 1.0f));
list.Add(new KeyValuePair<int, float>(2, 5.0f));
list.Add(new KeyValuePair<int, float>(1, 3.0f));
list.Add(new KeyValuePair<int, float>(6, 4.0f));
list.Add(new KeyValuePair<int, float>(5, 2.0f));
list.Add(new KeyValuePair<int, float>(4, 6.0f));

Heap<int,float> heap = new Heap<int,float>(list);

PriorityQueue<int,float> pq = new PriorityQueue<int,float>(heap);

pq.PrintOutAllMember();

Console.WriteLine("------------------------pq.Push(new KeyValuePair<int, float>(0, 0.0f));");

pq.Push(new KeyValuePair<int, float>(0, 0.0f));

pq.PrintOutAllMember();

Console.WriteLine("------------------------pq.Pop();");

pq.Pop();

pq.PrintOutAllMember();

Console.WriteLine("------------------------pq.Top();");

Console.WriteLine(pq.Top().ToString());

#endregion

Console.ReadKey();
}

Output:
PriorityQueue_Study

Building the heap by repeated insertion costs O(N * Log(N)) (the bottom-up BuildingHeap above is O(N)),
while each insert and removal is O(Log(N)).
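The index arithmetic the heap code relies on (left child at 2i+1, right child at 2i+2, parent at (i-1)/2) can be checked in isolation; HeapIndexMath is a hypothetical helper:

```csharp
// Index relations for a binary heap stored in a flat list.
class HeapIndexMath
{
    public static int LeftChild(int i)  { return 2 * i + 1; }
    public static int RightChild(int i) { return 2 * i + 2; }
    public static int Parent(int i)     { return (i - 1) / 2; }
}
```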

Sorting algorithms reference

Method

Two points about methods are covered here:

  1. Extension Methods
    "It allows you to define a static method that you can invoke using instance method syntax." (The main benefit of extension methods is that when you cannot add a method to a particular class or struct, you can define an extension method for it and then call it through an instance just like a regular member method.)
    An extension method must be defined in a static class as a static method, and its first parameter must be prefixed with the this keyword, followed by the type we want to extend.
public static class StringBuilderExtensions
{
public static Int32 Indexof(this StringBuilder sb, Char value)
{
for (Int32 index = 0; index < sb.Length; index++)
{
if (sb[index] == value)
{
return index;
}
}
return -1;
}
}

static void Main(string[] args)
{
StringBuilder sb = new StringBuilder("Hello. My name is Tony.");
Int32 index = sb.Indexof('T');
Console.WriteLine("sb.Indexof('T') = " + index);
}

Output:
ExtensionMethods
How an extension method call is resolved is a matter of how the compiler looks up methods; see the Extension Methods section of the Methods chapter in "CLR via C#".
Note the following when defining extension methods:
1. They can only be defined in a non-generic static class.
2. The class defining them must be at file scope (not nested inside another class).
3. The namespace containing the extension methods must be imported (so the compiler does not have to scan every file for them).
4. Use extension methods sparingly; they can cause versioning problems (the extended class may later gain an identically named method, so different versions behave differently).
2. Partial Methods
First, why do we need partial methods?
1. Overriding a virtual method requires the parent not to be sealed; methods of a sealed class or a value type cannot be overridden.
2. They avoid defining a whole subclass just to override an individual method.
To use them, prefix the method declaration in a partial class with the partial keyword.

internal sealed partial class Base
{
public String Name
{
get
{
return mName;
}
set
{
OnNameChanging(value.ToUpper());
mName = value;
}
}
private String mName;

//This defining-partial-method-declaration is called before changing the mName field
partial void OnNameChanging(String value);
}

internal sealed partial class Base
{
partial void OnNameChanging(string value)
{
Console.WriteLine("Base::OnNameChanging({0})", value);
}
}

static void Main(string[] args)
{
Base bs = new Base();
bs.Name = "Tony";
}

Output:
PartialMethods
Note the following about partial methods:
1. They can only be defined in a partial class or struct.
2. A partial method must return void and may not have parameters with the out modifier.
3. The implementing declaration must have the same signature as the defining declaration.
4. A delegate cannot refer to a partial method that has no implementation.
5. Partial methods are always private.

Comparison

  1. Type comparison
    System.Object.GetType()
    &&
    typeof()
    &&
    is operator – tests whether the operand is of a specific type or can be cast to it

    1. boxing – casting a value type into the System.Object type (a shallow copy) or into an interface type that the value type implements
    2. unboxing – the reverse process of boxing
  2. Value comparison
    operator overloading – must be static
    IComparable – compares the object's data with another object of the same type
    &&
    IComparer – compares two objects of different types or the same type
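The difference between the three type-comparison tools can be sketched (the AnimalBase/Cat names are hypothetical):

```csharp
using System;

class AnimalBase { }
class Cat : AnimalBase { }

class TypeComparisonDemo
{
    public static bool Run()
    {
        AnimalBase a = new Cat();
        // GetType() returns the runtime type; typeof() is resolved at compile time
        bool exactType = a.GetType() == typeof(Cat);
        // "is" also succeeds for base types the object can be cast to
        bool isAnimal = a is AnimalBase;
        return exactType && isAnimal;
    }
}
```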

Conversion

Conversion operators
implicit
explicit
e.g.

class A
{
public static implicit operator B(A a)
{
......
}
}

class B
{
public static explicit operator A(B b)
{
......
}
}

as operator
as
Use cases:

  1. the operand is of the target type
  2. the operand can be implicitly cast to the target type
  3. the operand can be boxed into the target type
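The three cases can be sketched as follows; unlike a cast, "as" yields null instead of throwing when the conversion fails (the Fruit/Apple names are hypothetical):

```csharp
using System;

class Fruit { }
class Apple : Fruit { }

class AsDemo
{
    public static bool Run()
    {
        Fruit f = new Apple();
        Apple a = f as Apple;      // succeeds: the runtime type matches
        Fruit g = new Fruit();
        Apple b = g as Apple;      // fails: returns null instead of throwing
        object boxed = 42;
        // "as" also works with an interface type the boxed value type implements,
        // but not with non-nullable value types such as int
        IComparable c = boxed as IComparable;
        return a != null && b == null && c != null;
    }
}
```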

Generics

Implemented with templates in C++
System.Collections.Generic
a value type cannot be initialized with null
Problems

  1. null (value type or reference type)
    1. the default keyword – a reference type is initialized with null; otherwise the type's default value is used
  2. type
    1. constraints
      the where keyword
      e.g.
      class A<T> where T : B (T must inherit from B)

Code e.g.

   using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;

namespace CSharpStudy
{
class Program
{
//Can not instantiation with int when where T1 : class
//public class GenericClass<T1> where T1 : class
public class GenericClass<T1> : IEnumerable<T1> where T1 : struct
{
private List<T1> m_Member = new List<T1>();

public GenericClass()
{
//use T1's default value
//m_Member.Add(default(T1));
}
public List<T1> GetMember
{
get
{
return m_Member;
}
}
public IEnumerator<T1> GetEnumerator()
{
return m_Member.GetEnumerator();
}

IEnumerator IEnumerable.GetEnumerator()
{
return m_Member.GetEnumerator();
}

public static implicit operator List<T1>(GenericClass<T1> gc)
{
List<T1> result = new List<T1>();
foreach (T1 i in gc)
{
result.Add(i);
}
return result;
}

public static GenericClass<T1> operator +(GenericClass<T1> gc, List<T1> l)
{
GenericClass<T1> result = new GenericClass<T1>();
foreach (T1 m in gc)
{
result.GetMember.Add(m);
}
foreach (T1 m in l)
{
if (!result.GetMember.Contains(m))
{
result.GetMember.Add(m);
}
}
return result;
}
};

static void Main(string[] args)
{
//Nullable problem,
//value type can not be initiated with null
//int normalint = null;
System.Nullable<int> nullableint = null;
//nullable
int? nullalbleint = null;
int? result = nullalbleint ?? 5;
Console.WriteLine("result = " + result);
List<int> list = new List<int>(2);
list.Add(1);
list.Add(2);
foreach (int i in list)
{
Console.WriteLine("list value = " + i);
}

GenericClass<int> gc = new GenericClass<int>();
Console.WriteLine("gc.m_Member = " + gc.GetMember);

GenericClass<int> gc2 = new GenericClass<int>();
gc2.GetMember.Add(11);
gc2.GetMember.Add(111);
GenericClass<int> gc3 = new GenericClass<int>();
gc3.GetMember.Add(11);
gc3.GetMember.Add(22);
gc3.GetMember.Add(222);

gc = gc2 + gc3;
foreach (int i in gc)
{
Console.WriteLine("gc member = " + i);
}

Console.ReadKey();
}
}
}

Output:
Generic

Variance

  1. Covariance – the out keyword
    Mainly used for implicit conversion of interface and delegate return types (derived class to base class)
  2. Contravariance – the in keyword
    The opposite of covariance, for parameter types (base class to derived class)
    Code e.g.
//Covariance
static void ListAnimals(IEnumerable<Animal> animals)
{
foreach (Animal animal in animals)
{
Console.WriteLine(animal.ToString());
}
}

static void FeeAnimal(Func<Animal> animalCreator)
{
var animal = animalCreator();
Console.WriteLine("animal.name = " + animal.Name);
}

static void FeeAnimal(Func<Cow> animalCreator)
{
var animal = animalCreator();
Console.WriteLine("animal.name = " + animal.Name);
}

static Cow CreateCow()
{
return new Cow("DelegateCow");
}

//Contravariance
static void FeeAnimal(Animal animal)
{
Console.WriteLine("FeeAnimal:" + animal.Name);
}

static void Execute(Action<Cow> cact)
{
cact(new Cow("ExecuteCow"));
}

//Covariance
List<Cow> cows = new List<Cow>();
cows.Add(new Cow("Cow1"));
ListAnimals(cows);

FeeAnimal(CreateCow);
Func<Cow> cFunc = CreateCow;
Func<Animal> aFunc = cFunc;

//Contravariance
Action<Animal> aAct = FeeAnimal;
Action<Cow> cAct = aAct;

Execute(aAct);

Output:
Variance_Contravariance

Note:
In C++ the compiler detects the concrete types a template is used with at compile time;
in C# generic type checking happens at run time.

Hosting, AppDomain, Assembly, Reflection

This chapter is mainly about assembly loading and reflection.
Before studying them, we need to understand the concepts of hosting and AppDomains.

Hosting

The following English excerpts are from "CLR via C#":
Hosting allows any application to use the features of the common language runtime(CLR). Furthermore, hosting allows applications the ability to offer customization and extensibility via programming.
Extensibility means that third-party code will be running inside your process.

The hosting application can call methods defined by ICLRMetaHost interface to:

  1. Set Host managers. Tell the CLR that the host wants to be involved in making decisions related to memory allocations, thread scheduling/synchronization, assembly loading, and more. The host can also state that it wants notifications of garbage collection starts and stops and when certain operations time out.
  2. Get CLR managers. Tell the CLR to prevent the use of some classes/members. In addition, the host can tell which code can and can’t be debugged and which methods in the host should be called when a special event—such as an AppDomain unload, CLR stop, or stack overflow exception—occurs.
  3. Initialize and start the CLR.
  4. Load an assembly and execute code in it.
  5. Stop the CLR, thus preventing any more managed code from running in the Windows process.

Hosting (allows any application to offer CLR features) benefits:

  1. Programming can be done in any programming language.
  2. Code is just-in-time (JIT)–compiled for speed (versus being interpreted).
  3. Code uses garbage collection to avoid memory leaks and corruption.
  4. Code runs in a secure sandbox.
  5. The host doesn’t need to worry about providing a rich development environment. The
    host makes use of existing technologies: languages, compilers, editors, debuggers, profilers, and more.
    In summary, hosting lets us exploit the CLR's features: through the host we can configure many CLR-related settings (GC, memory manager, ...), initialize the CLR, create the default AppDomain, and have the CLR load assemblies into an AppDomain and execute them.

AppDomain

AppDomain allows third-party untrusted code to run in an existing process, and the CLR guarantees that the data structures, code, and security context will not be exploited or compromised. (An AppDomain lets untrusted code run inside the current process while the CLR safeguards the data structures, code, and so on.)
The relationship between AppDomains and the CLR:
"AppDomains are a CLR feature."

"When the CLR COM server initializes, it creates an AppDomain. An AppDomain is a logical container for a set of assemblies. The first AppDomain created when the CLR is initialized is called the default AppDomain; this AppDomain is destroyed only when the Windows process terminates." (An AppDomain is a container for a set of assemblies. The CLR creates the default AppDomain when it initializes, and that AppDomain is destroyed only when the process exits.)

The whole purpose of an AppDomain is to provide isolation. Here are the specific features offered by an AppDomain:

  1. Objects created by code in one AppDomain cannot be accessed directly by code in another AppDomain
  2. AppDomains can be unloaded
  3. AppDomains can be individually secured (an AppDomain's permission set constrains what its assemblies may do)
  4. AppDomains can be individually configured (the configuration affects how assemblies are loaded, etc.)

Since AppDomains exist for program isolation, processes deserve a mention as well.
"Process isolation prevents security holes, data corruption, and other unpredictable behaviors from occurring, making Windows and the applications running on it robust."
A process isolates programs at the operating-system level, while an AppDomain isolates them within a process (one process can create several AppDomains).
Let's look at how a program works across processes, AppDomains, and the CLR:
CLRAppDomainProcessRelationship
One process creates several AppDomains; each AppDomain loads particular assemblies and has its own loader heap, which records the types that AppDomain has accessed. When a type's method is called, its IL code is JIT-compiled to machine code and executed.
Ordinary AppDomains keep their assemblies completely isolated, so even if several AppDomains reference the same assembly, they share no data or memory.
The figure also shows a special AppDomain for domain-neutral assemblies.
Its purpose is to share common assemblies: assemblies loaded into it can be accessed by all AppDomains.
Although assemblies are fully isolated between AppDomains, objects created in different AppDomains can still access each other.
Let's see how objects created in different AppDomains do that:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;
using System.Reflection;

using System.Runtime.InteropServices;
using System.Threading;
using System.Runtime;
using System.Runtime.Remoting;

namespace CSharpDeepStudy
{
class Program
{
#region Hosting and AppDomain Study
// Instances can be marshaled-by-reference across AppDomain boundaries
[Serializable]
public sealed class MarshalByRefType : MarshalByRefObject
{
public MarshalByRefType()
{
Console.WriteLine("{0} ctor running in {1}", this.GetType().ToString(), Thread.GetDomain().FriendlyName);
}

public void SomeMethod()
{
Console.WriteLine("Executing in " + Thread.GetDomain().FriendlyName);
}

public MarshalByValType MethodWithReturn()
{
Console.WriteLine("Executing in " + Thread.GetDomain().FriendlyName);
MarshalByValType t = new MarshalByValType();
return t;
}

public NonMarshalableType MethodArgAndReturn(String callingdomainname)
{
Console.WriteLine("Calling from {0} to {1}", callingdomainname, Thread.GetDomain().FriendlyName);
NonMarshalableType t = new NonMarshalableType();
return t;
}
}

// Instances can be marshaled-by-value across AppDomain boundaries
[Serializable]
public sealed class MarshalByValType : Object
{
private DateTime m_CreationTime = DateTime.Now;

public MarshalByValType()
{
Console.WriteLine("{0} ctor running in {1}, Created on {2:D}", this.GetType().ToString(), Thread.GetDomain().FriendlyName, m_CreationTime);
}

public override String ToString()
{
return m_CreationTime.ToLongDateString();
}
}

// Instances cannot be marshaled across AppDomain boundaries
// [Serializable]
public sealed class NonMarshalableType : Object
{
public NonMarshalableType()
{
Console.WriteLine("Excuting in " + Thread.GetDomain().FriendlyName);
}
}
#endregion

private static void Marshalling()
{
//Obtain current thread AppDomain
AppDomain currentthreadappdomian = Thread.GetDomain();
String callingdomainname = currentthreadappdomian.FriendlyName;
Console.WriteLine("currentthreadappdomian.name = " + callingdomainname);

//Get the assembly that contains the main method
Assembly mainassembly = Assembly.GetEntryAssembly();
String exeassembly = mainassembly.FullName;
Console.WriteLine("Assembly's that contains main method name is " + exeassembly);

//Accessing Objects Across AppDomain Boundaries
//Cross-AppDomain Communication using marshal-by-reference
AppDomain ad2 = null;
ad2 = AppDomain.CreateDomain("AD2", null, null);
MarshalByRefType mbrt = null;
mbrt = (MarshalByRefType)ad2.CreateInstanceAndUnwrap(exeassembly, typeof(MarshalByRefType).FullName);
Console.WriteLine("Type = {0}", mbrt.GetType());
//Prove that we got a reference to a proxy object
Console.WriteLine("Is proxy = {0}", RemotingServices.IsTransparentProxy(mbrt));
//Call method in the AppDomain owning the objects
mbrt.SomeMethod();
//Unload the new AppDomian
AppDomain.Unload(ad2);
//try access mbrt after we unload AppDomain it owned
try
{
mbrt.SomeMethod();
Console.WriteLine("Successful call SomeMehtod()");
}
catch (AppDomainUnloadedException)
{
Console.WriteLine("Failed call SomeMethod()");
}

//Cross-AppDomain Communication using Marshal-by-value
//Create new AppDomain
ad2 = AppDomain.CreateDomain("AD3", null, null);
mbrt = (MarshalByRefType)ad2.CreateInstanceAndUnwrap(exeassembly, typeof(MarshalByRefType).FullName);

MarshalByValType mbvt = mbrt.MethodWithReturn();
//Prove that we did NOT get a reference to a proxy object
Console.WriteLine("Is Proxy={0}", RemotingServices.IsTransparentProxy(mbvt));
//Try call method on real object
Console.WriteLine("Returned object created " + mbvt.ToString());
//Unload AppDomain again
AppDomain.Unload(ad2);
//try access method on real object again
try
{
Console.WriteLine("Returned object created " + mbvt.ToString());
Console.WriteLine("Successful call.");
}
catch (AppDomainUnloadedException)
{
Console.WriteLine("Failed call.");
}

//Cross-AppDomain Communication Using non-marshalable type
ad2 = AppDomain.CreateDomain("AD4", null, null);
//Load assembly into the new AppDoamin
mbrt = (MarshalByRefType)ad2.CreateInstanceAndUnwrap(exeassembly, typeof(MarshalByRefType).FullName);

//call the object method to get non-marshalable object
try
{
NonMarshalableType nmt = mbrt.MethodArgAndReturn(callingdomainname);
}
catch (System.Exception e)
{
Console.WriteLine(e.ToString());
}
}

static void Main(string[] args)
{
#region Hosting and AppDomain Study
Marshalling();
#endregion

Console.ReadKey();
}
}
}

CrossAppDomainCommunicationOutPut
The test above covers the following three scenarios:

  1. Cross-AppDomain Communication Using Marshal-by-Reference
    To marshal by reference between AppDomains, the type must derive from MarshalByRefObject. (RemotingServices.IsTransparentProxy checks whether an object is a proxy.)
    Under the hood, a proxy type is generated in the destination AppDomain, including instance fields that record which AppDomain really owns the type and how to find the real object in that AppDomain.
    This explains why calls through the proxy fail once the owning AppDomain is unloaded: AppDomain.Unload() releases every assembly in that AppDomain and every object created from those assemblies.
    Note:
    "although you can access fields of a type derived from MarshalByRefObject, the performance is particularly bad because the CLR really ends up calling methods to perform the field access."
  2. Cross-AppDomain Communication Using Marshal-by-Value
    Marshal-by-value does not require deriving from MarshalByRefObject, but the type must be marked [Serializable], because the destination AppDomain loads and reconstructs the object through serialization and deserialization.
  3. Cross-AppDomain Communication Using Non-Marshalable Types
    The last case throws because we marshal NonMarshalableType by value without marking it [Serializable], so serializing it into the destination AppDomain fails.
    I have not studied AppDomains in depth; corrections are welcome. See "CLR via C#" for details.
    A summary of how Hosting, CLR, AppDomain, Process, and Assembly work together:
    Hosting lets us use the CLR's features; through the host we can configure many CLR-related settings (GC, memory manager, ...).
    After the CLR initializes, it creates the default AppDomain.
    The CLR loads assemblies into AppDomains and executes them.
    One process can contain several AppDomains.
    Each AppDomain has its own loader heap recording the types loaded into it.
    When a type's method is called, its IL code is JIT-compiled to machine code and executed.
    Ordinary AppDomains keep their assemblies completely isolated, so even if several AppDomains reference the same assembly, they share no data or memory.
    Assemblies loaded into the domain-neutral AppDomain, however, can be accessed by all AppDomains.
    Although assemblies are isolated between AppDomains, objects created in different AppDomains can still reach each other via marshal-by-value and marshal-by-reference.
    For more on AppDomains see the CLR Hosting and AppDomains chapter of "CLR via C#" (e.g. AppDomain Monitoring, How Hosts Use AppDomains, ...).
    Note:
    On Windows the default AppDomain is named after the executable, ***.exe.

Having covered the basics of AppDomains, let's look at assembly loading:

Assembly Loading

An assembly is a collection of information: type, method, and member definitions, the program name, version number, a self-description, file relationships, file locations, and so on.
System.Reflection.Assembly.Load – loads an assembly into the AppDomain. (Prefer System.Reflection.Assembly.Load over System.AppDomain.Load.)
System.Reflection.Assembly.LoadFrom – loads the assembly at the given path (a URL also works) into the AppDomain.
System.Reflection.Assembly.ReflectionOnlyLoad or ReflectionOnlyLoadFrom – loads the assembly without executing any code in it (used only to read information from the assembly). With these two methods you must register the AppDomain's ReflectionOnlyAssemblyResolve event to load the referenced assemblies manually.
Now that we know how to load an assembly, and that an assembly contains all the information our program needs to create instances, how do we use that information dynamically? The answer is reflection.
System.Reflection provides many ways to access the fields, methods, properties, and other information inside an assembly.
With that information we can build decompilers like ILDasm.exe, since some of these classes expose a method's IL instruction data.
This is also why System.Reflection.Emit can create types dynamically (building methods and types back up from IL instructions). See AOT & JIT.
Next, let's look at reflection:

Reflection

A classic use of reflection is serialization: serializers use reflection to obtain type information for storing and reconstructing instances.
Serialization will be covered later.
Reflection's greatest strength is that it can create and use, at run time, types that are unknown at compile time.
Its drawbacks are:

  1. Reflection prevents type safety at compile time (types are created dynamically at run time, so the compiler cannot verify them)
  2. Reflection is slow (type information must be discovered at run time before anything can be created)
    Since the main cost is the run-time lookup of type information, prefer designs where the type is known at compile time.
    For example:
    implement a class derived from a base class or interface and call its methods polymorphically (the call target is known at compile time).
    Next, let's see how to use reflection to access type information and invoke methods.
    First, how to read the type information inside an assembly:
    we start by creating a DLL that contains only a few classes.
using System;
......

namespace CSharpDLL
{
public class Program
{
public class CSharpDLLPublicClass1
{
......
}

public class CSharpDLLPublicClass2
{
......
}

sealed class CSharpDLLSealedClass
{
......
}

static void Main(string[] args)
{
}
}
}

Then load it with Assembly.LoadFrom and inspect the Type information inside:

private static void LoadAssemAndShowPublicTypes(string assemblename)
{
Assembly a = Assembly.LoadFrom(assemblename);
foreach (Type t in a.GetExportedTypes())
{
Console.WriteLine(t.FullName);
}
}

static void Main(string[] args)
{
LoadAssemAndShowPublicTypes("CSharpDLL.dll");

Console.ReadKey();
}

ExportPublicAssemblyType
Every class except CSharpDLLSealedClass was printed. (It is skipped not because it is sealed but because, having no access modifier, it is internal, and GetExportedTypes() returns only public types.)
So what kind of type is Type?
"Represents type declarations: class types, interface types, array types……A System. Type object represents a type reference"
A TypeInfo instance contains the definition for a Type, and a Type now contains only reference data.
In other words, TypeInfo holds the type's definition, while Type stores only a reference to that definition.
We can obtain a Type through:

  1. System.Type.GetType
  2. System.Type.ReflectionOnlyGetType – members can only be invoked through reflection
  3. System.Reflection.TypeInfo.GetDeclaredNestedType
  4. System.Reflection.Assembly.GetType or ExportedTypes or DefinedTypes
  5. the typeof operator – early-bound
    TypeInfo carries the bulk of a type's information. System.Reflection.IntrospectionExtensions.GetTypeInfo converts a Type into a TypeInfo (TypeInfo is only supported from .NET 4.5 on), through which the type's details can be read.
    AsType converts a TypeInfo back into a Type.

With a Type in hand, we can construct an instance through:

  1. System.Activator.CreateInstance
  2. System.Activator.CreateInstanceFrom
  3. System.AppDomain.CreateInstance or ……
  4. System.Reflection.ConstructorInfo.Invoke
    These methods cannot be used to create arrays or delegates:
    use Array.CreateInstance to create an array
    and MethodInfo.CreateDelegate to create a delegate.
    To instantiate a generic type, first call Type.MakeGenericType to supply the type arguments; the returned Type is the closed generic type, which can then be instantiated with the methods above.
    Here is the process of creating a generic-class instance:
// The open generic type must be obtained first, e.g. Dictionary<,>
Type opentype = typeof(Dictionary<,>);
Type closedtype = opentype.MakeGenericType(typeof(string), typeof(int));
Object o = Activator.CreateInstance(closedtype);

Console.WriteLine(o.GetType());

CreateGenericInstance
Now that we know how to create an instance from a Type, let's look at how to use reflection to access everything inside a Type:
Before starting, let's see how the classes in Reflection map to the various parts of a Type (e.g. Method, Field, Property, Event……)
ClassHierchyOfReflection
MemberInfo represents any member of a Type; FieldInfo, PropertyInfo, EventInfo, MethodBase and so on correspond to the fields, properties, events and methods of a class definition.
Next, let's modify the code in the CSharpDLL.dll defined earlier:

using System;

namespace CSharpDLL
{
public class Program
{
public class CSharpDLLPublicClass1
{
public CSharpDLLPublicClass1()
{
mPublicClass1ID = 0;
}

public void CSharpDLLPublicClass1Method()
{
Console.WriteLine("CSharpDLLPublicClass1Method() called");
}

public int PublicCLass1ID
{
get
{
return mPublicClass1ID;
}
set
{
mPublicClass1ID = value;
}
}
private int mPublicClass1ID;
}

static void Main(string[] args)
{
}
}
}

Then we use the Reflection APIs to print all of the public type definitions in CSharpDLL.dll.
Because Type.GetTypeInfo() is only supported from .NET 4.5 onward, I use Type.GetMembers() here to access and print only the public members rather than all of them.

private static void PrintAllTypeInfoInAssembly(string assemblename)
{
Assembly a = Assembly.LoadFrom(assemblename);
Console.WriteLine(string.Format("{0}.Fullname = {1}",assemblename,a.FullName));
foreach (Type t in a.GetExportedTypes())
{
Console.WriteLine(string.Format("Type = {0}", t));
foreach (MemberInfo mi in t.GetMembers())
{
String typename = String.Empty;
if (mi is Type)
{
typename = "Type";
}
else if (mi is FieldInfo)
{
typename = "FieldInfo";
}
else if (mi is MethodInfo)
{
typename = "MethodInfo";
}
else if (mi is ConstructorInfo)
{
typename = "ConstructorInfo";
}
else if (mi is PropertyInfo)
{
typename = "PropertyInfo";
}
else if (mi is EventInfo)
{
typename = "EventInfo";
}
Console.WriteLine(string.Format("{0} : {1}", typename, mi.ToString()));
}
}
}

static void Main(string[] args)
{
PrintAllTypeInfoInAssembly("CSharpDLL.dll");
}

PrintOutPublicMemberInfo
This prints out all of the public MemberInfo entries.
Below is the class hierarchy Reflection uses to access program information:
ReflectionClassHierarchical
Now that we can access the information of a specific type, invoking it through reflection is straightforward:
we simply build an instance through the constructor, then pass that instance to Invoke to call the corresponding method.

private static void ReflectionInvoke(string assemblename, string classname, string methodname)
{
Assembly a = Assembly.LoadFrom(assemblename);
Console.WriteLine(string.Format("{0}.Fullname = {1}",assemblename,a.FullName));
foreach (Type t in a.GetExportedTypes())
{
// t is always a Type here, so we only need to match the name
if (t.Name == classname)
{
ConstructorInfo constructor = t.GetConstructor(Type.EmptyTypes);
object instance = constructor.Invoke(new object[] { });
MethodInfo method = t.GetMethod(methodname);
method.Invoke(instance, new object[]{});
}
}
}
static void Main(string[] args)
{
ReflectionInvoke("CSharpDLL.dll", "CSharpDLLPublicClass1","CSharpDLLPublicClass1Method");
}

ReflectionMethodInvoke
With this we have dynamically loaded an Assembly and invoked a specific method of a specific class inside it through reflection.
For using events through reflection and creating delegates dynamically, see the Assembly Loading and Reflection chapter of 《CLR via C#》.

Note: the test results discussed below do not match the book's, and I don't yet know why, so I won't judge whether the conclusion is right or wrong.
If we need to frequently access the methods and members of particular classes through reflection, we would normally cache the Type and MemberInfo-derived objects in a collection and then go through the collection.
"Type and MemberInfo-derived objects require a lot of memory."
Type, MemberInfo and their derived types store a large amount of type information and therefore consume a lot of memory.
How can this be mitigated?
"Developers who are saving/caching a lot of Type and MemberInfo-derived objects can reduce their working set by using run-time handles instead of objects."
Storing run-time handles instead of the Type/MemberInfo objects themselves saves a lot of memory; a handle can later be converted back to the corresponding Type/MemberInfo to access the type information.

  1. RuntimeTypeHandle
  2. RuntimeFieldHandle
  3. RuntimeMethodHandle
    "All of these types are value types that contain just one field, an IntPtr. The IntPtr field is a handle that refers to a type, field, or method in an AppDomain's loader heap."
    A run-time handle contains a single member, an IntPtr, which is effectively an index or pointer to the type information; the actual type information lives on the AppDomain's loader heap.
    So by building run-time handles that point at a specific type, field or method on the AppDomain loader heap, and converting a handle back to the corresponding Type/MemberInfo only when the type information is needed, we avoid holding large numbers of Type and MemberInfo objects and save memory.
    Let's see how much memory run-time handles actually save, and how to convert a handle back to access the type information:
private static void ShowHeapMemoryUsing(string postfix)
{
Console.WriteLine(string.Format("Heap Memory Size = {0} -- {1}",GC.GetTotalMemory(true), postfix));
}

private static void RuntimeTypeHandleAccessObjectTypeInfo()
{
ShowHeapMemoryUsing("Before do anything!");

List<MethodBase> methodinfos = new List<MethodBase>();
foreach (Type t in typeof(Object).Assembly.GetExportedTypes())
{
//skip over any generic types
if(t.IsGenericTypeDefinition) continue;

MethodBase[] mb = t.GetMethods();
methodinfos.AddRange(mb);
}

Console.WriteLine(string.Format("Methods Number in Object : {0}",methodinfos.Count));

ShowHeapMemoryUsing("After building cache of MethodInfo objects!");

//Build cache of RuntimeMethodHandles for all MethodInfo objects in class.name = classname
List<RuntimeMethodHandle> methodhandles = methodinfos.ConvertAll<RuntimeMethodHandle>(mb => mb.MethodHandle);

ShowHeapMemoryUsing("Holding MethodInfo and RuntimeMethodHandle cache!");

//Prevent cache from being GC'd early
GC.KeepAlive(methodinfos);

//Allow cache to be GC'd now
methodinfos = null;

ShowHeapMemoryUsing("After freeing MethodInfo Objects!");

//Obtain methodinfos from methodhandle
methodinfos = methodhandles.ConvertAll<MethodBase>(rmh => MethodBase.GetMethodFromHandle(rmh));

ShowHeapMemoryUsing("Size of heap after re-creating MethodInfo objects!");

GC.KeepAlive(methodhandles);
GC.KeepAlive(methodinfos);

//Allow cache to be GC'd now
methodhandles = null;
methodinfos = null;

ShowHeapMemoryUsing("After freeing MethodInfos and RuntimeMethodHandles!");
}

RuntimeHandlesUsing
The test results above are completely different from those in 《CLR via C#》:
RuntimeHandlesTestInBook
After freeing methodinfos the reported memory did not shrink, and after converting the run-time handles back to MethodBase objects the reported memory usage actually went down — the opposite of what the book shows. (So whether run-time handles really reduce memory usage here remains an open question for me.)

Object objinstance = typeof(Object).GetConstructor(new Type[] { }).Invoke(new Object[]{});

MethodBase migethashcode = methodinfos.Find( mi=> mi.Name == "GetHashCode");

Console.WriteLine(string.Format("objinstance.GetHashCode = {0}",migethashcode.Invoke(objinstance, new Object[]{})));

Note:
"The CLR doesn't support the ability to unload individual assemblies. If you want to unload an assembly, you must unload the entire AppDomain that contains it." (the CLR cannot unload a single assembly; to unload one you must unload the AppDomain that loaded it)
"Avoid using reflection to access a field or invoke a method or property." (avoid using reflection to call methods or access fields/properties, because it is slow)

Having covered Hosting, AppDomains, Assemblies and Reflection, let's look at how Serialization is implemented.

Runtime Serialization

“Serialization is the process of converting an object or a graph of connected objects into a stream of
bytes. Deserialization is the process of converting a stream of bytes back into its graph of connected objects.”
Serialization and deserialization let us store an object's state as bytes and later rebuild the object from those bytes.

“When serializing an object, the full name of the type and the name of the type’s defining assembly are written to the stream.When deserializing an object, the formatter first grabs the assembly identity and ensures that the assembly is loaded into the executing AppDomain by calling System.Reflection.Assembly’s Load method.”
As the quote shows, the key to serialization is writing the type information and the object's data into the byte stream, and then using reflection to instantiate the object again on deserialization.
During deserialization we must make sure the correct Assembly is loaded and that the type information matches what was used during serialization.

So how do we make a type serializable?
We add a flag in front of the type definition:
[Serializable]
"the SerializableAttribute attribute is not inherited by derived types."
The attribute applies only to the type it is placed on; derived types do not inherit it.
Since a type can be marked serializable, a field can of course also be flagged as not serializable:
[NonSerialized]
Then how do we make sure such non-serialized members get initialized to the correct values during deserialization?
That is what this flag is for:
[OnDeserialized]
A method marked [OnDeserialized] is called when an instance of the type is deserialized, and can initialize the [NonSerialized] members.
And if we later add new members to the type, how do we keep deserialization of old streams from failing?
Just put the following flag in front of each newly added member:
[OptionalFieldAttribute]
Specifies that a field can be missing from a serialization stream so that the BinaryFormatter and the SoapFormatter does not throw an exception.
It marks the field as allowed to be missing from the stream without an exception being thrown.
More serialization control flags:
[OnSerializing] – called during serialization of an object
[OnSerialized] – called after serialization of an object
[OnDeserializing] – called during deserialization of an object
[OnDeserialized] – called immediately after deserialization of an object
Note:
A method marked with any of the flags above must take a single StreamingContext parameter.

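A minimal sketch of the attribute flags above in action (the Session class and its fields are hypothetical, invented for illustration; BinaryFormatter is used as in the rest of this article, though it is deprecated in modern .NET):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Session
{
    public int UserId;

    [NonSerialized]          // this field is not written to the stream
    private DateTime mLoadTime;

    [OnDeserialized]         // re-initialize the non-serialized member after deserialization
    private void OnDeserialized(StreamingContext context)
    {
        mLoadTime = DateTime.Now;
    }

    public DateTime LoadTime { get { return mLoadTime; } }
}

class SerializationFlagsDemo
{
    static void Main()
    {
        var s = new Session { UserId = 7 };
        var bf = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            bf.Serialize(ms, s);
            ms.Position = 0;
            var copy = (Session)bf.Deserialize(ms);
            // UserId round-trips; mLoadTime was restored by the [OnDeserialized] method
            Console.WriteLine("{0} {1}", copy.UserId, copy.LoadTime != default(DateTime));
        }
    }
}
```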
Next, let's look at the serialization process in detail:
first a quick look at how Serialization and Deserialization are used

[Serializable]
public class Map
{
......
}

//Serialization
BinaryFormatter bf = new BinaryFormatter ();
FileStream fs = File.Open (mMapSavePath, FileMode.Open);
bf.Serialize (fs, mMap);
fs.Close ();

//Deserialization
BinaryFormatter bf = new BinaryFormatter ();
FileStream fs = File.Open (mMapSavePath, FileMode.Open);
mMap = (Map)bf.Deserialize (fs);
fs.Close ();

The following is based on 《CLR via C#》.
Serialize Steps:

  1. The formatter calls FormatterServices's GetSerializableMembers method.
    public static MemberInfo[] GetSerializableMembers(Type type, StreamingContext context);
    This collects the member information (MemberInfo) of everything that needs to be serialized, returned as a MemberInfo[].
  2. The object being serialized and the array of System.Reflection.MemberInfo objects are then passed to FormatterServices' static GetObjectData method.
    Using the MemberInfo array, the member values are read out of the object and stored in an Object[].
  3. The formatter writes the assembly's identity and the type's full name to the stream.
  4. The formatter then enumerates over the elements in the two arrays, writing each member's name and value to the stream.
    Finally each MemberInfo name (from the MemberInfo[]) and the corresponding member value (from the Object[]) are written to the stream in matching pairs.

Deserialize Steps:

  1. 首先通过写入stream的assembly identity和type name去判断对应的Assembly是否已经加载。
    如果加载了就通过FormatterServices::GetTypeFromAssembly去获取需要deserialize的type信息
  2. 然后通过FormatterServices::GetUninitializedObject去预分配内存但不调用构造函数,所有成员数据为null or 0
  3. 然后利用FormatterSerices::GetSerializableMembers得到支持序列化的类型成员信息用于构建和初始化
  4. 读取之前序列化保存成员数据信息
  5. 利用前面得到的支持序列化的成员信息和读取出的成员数据信息去初始化Object。FormatterServices::PopulateObjectMembers方法负责填充数据。

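The two step lists above can be sketched with the actual FormatterServices APIs (the Point class is my own illustration):

```csharp
using System;
using System.Reflection;
using System.Runtime.Serialization;

[Serializable]
class Point { public int X; public int Y; }

class FormatterServicesDemo
{
    static void Main()
    {
        var p = new Point { X = 1, Y = 2 };

        // Serialize side: collect the serializable members and read their values
        MemberInfo[] members = FormatterServices.GetSerializableMembers(typeof(Point));
        object[] data = FormatterServices.GetObjectData(p, members);

        // Deserialize side: allocate without running a constructor, then fill in the data
        var copy = (Point)FormatterServices.GetUninitializedObject(typeof(Point));
        FormatterServices.PopulateObjectMembers(copy, members, data);

        Console.WriteLine("{0},{1}", copy.X, copy.Y);
    }
}
```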
Since serialization is implemented on top of reflection, and reflection is slow, how can data be serialized efficiently?
As mentioned above, the actual filling and reading of the serialized data happens in FormatterServices::GetObjectData, and the default GetObjectData fills the data via reflection. So if a serializable class implements ISerializable.GetObjectData to fill the data itself, the reflection-based filling can be avoided.
To keep the data passed into GetObjectData secure, add this in front of the GetObjectData definition:
[SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)]
We also need a special constructor that is called during deserialization; it receives the SerializationInfo containing the deserialized data, and IDeserializationCallback.OnDeserialization(Object sender) then fills in the data to complete deserialization.

[Serializable]
public class Map : ISerializable, IDeserializationCallback
{
//Special construct(required by ISerializable) to control deserialization
[SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)]
protected Map(SerializationInfo info, StreamingContext context)
{
Console.WriteLine("protected Map(SerializationInfo info, StreamingContext context) called!");
m_SiInfo = info;
}

[SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)]
public virtual void GetObjectData(SerializationInfo info, StreamingContext context)
{
Console.WriteLine("Map::GetObjectData() called!");
info.AddValue("mID", mID);
info.AddValue("mMapName", mMapName);
}

void IDeserializationCallback.OnDeserialization(Object sender)
{
Console.WriteLine("Map::OnDeserialization() called!");
if (m_SiInfo == null)
{
return;
}

mID = m_SiInfo.GetInt32("mID");
mMapName = m_SiInfo.GetString("mMapName");
}

private SerializationInfo m_SiInfo;

public Map()
{
mID = 0;
mMapName = "DefaultMap";
}

public int ID
{
get
{
return mID;
}
set
{
mID = value;
}

}
private int mID;

public string MapName
{
get
{
return mMapName;
}
set
{
mMapName = value;
}
}
private string mMapName;
}

static void Main(string[] args)
{
string mMapSavePath = "./mapInfo.dat";
Map mMap = new Map();
mMap.ID = 110;
mMap.MapName = "TonyMap";
BinaryFormatter bf = new BinaryFormatter ();
if (!File.Exists(mMapSavePath))
{
FileStream fsc = File.Create(mMapSavePath);
fsc.Close();
}

FileStream fs = File.Open(mMapSavePath, FileMode.Open);
bf.Serialize(fs, mMap);
fs.Close();

//Deserialization
Map mDSMap;
BinaryFormatter dsbf = new BinaryFormatter ();
if (File.Exists(mMapSavePath))
{
FileStream dsfs = File.Open(mMapSavePath, FileMode.Open);
mDSMap = (Map)dsbf.Deserialize(dsfs);
dsfs.Close();
Console.WriteLine(string.Format("Map.ID = {0}, Map.MapName = {1}", mDSMap.ID, mDSMap.MapName));
}
}

Serialization
We have successfully customized the filling and parsing of the data, avoiding the unnecessary reflection calls.

—————————2018/04/22————————————-
(However, actual testing showed this did not speed up serialization or deserialization; it increased memory overhead instead. For details see: Data-Config-Automation)
—————————2018/04/22————————————-

For more on Serialization see the Runtime Serialization chapter of 《CLR via C#》

Note:
The .NET Framework also offers other serialization technologies that are designed more for interoperating between CLR data types and non-CLR data types. (the technologies below support interchange between CLR and non-CLR data types, e.g. serializing to and from XML; I haven't studied them in depth yet)

  1. System.Xml.Serialization.XmlSerializer class
  2. System.Runtime.Serialization.DataContractSerializer class
    There is also serialization via the SoapFormatter class (the .soap format).
    And for an efficient, platform-independent approach to serialization, see the notes on Google Protocol Buffers.
    To be continued……

Platform Invoke

Cross-language calls, e.g. managed C# calling unmanaged C++ code.
DllImport – Allows reusing existing unmanaged code in a managed application.

The DllImport attribute matters a great deal when importing: it ensures we find the correct unmanaged function, pass the correct parameter types, and so on.

The DllImport attribute has several important parameters:
EntryPoint – the name of the unmanaged method to import (the method can only be found if the correct name is given)
CharSet – how string types are marshaled, e.g. Unicode or ANSI (wide characters and single-byte characters differ)
CallingConvention – the calling convention of the function (usually __stdcall or __cdecl; the former is the convention used by the Win32 API, the latter the default C/C++ convention, and the call only works when both sides use the same convention)
Note:
The calling convention determines the order in which parameters are passed, how they are passed, who maintains the stack, and how exported names are decorated.

An example of calls with ordinary parameter types:

MyMath.h
#ifndef MATH_H
#define MATH_H

#include "stdafx.h"

#define UTILITYDLL_API _declspec(dllexport)

class MyMath
{
public:
static UTILITYDLL_API int __stdcall MyAdd(int a, int b);

static UTILITYDLL_API int __cdecl MySubstract(int a, int b);
};

#endif // MATH_H

MyMath.cpp
#include "stdafx.h"
#include "MyMath.h"
#include <string>

using namespace std;

UTILITYDLL_API int __stdcall MyMath::MyAdd(int a, int b)
{
return a + b;
}

UTILITYDLL_API int __cdecl MyMath::MySubstract(int a, int b)
{
return a - b;
}

extern "C"
{
UTILITYDLL_API double MyMultiple(double a, double b)
{
return a * b;
}
};

UTILITYDLL_API double __stdcall MyDivision(double a, double b)
{
return a / b;
}

struct MyStruct
{
int mID;
bool mBMan;
int mAge;
};
extern "C"
{
UTILITYDLL_API int ModifyMyStruct(MyStruct* ms)
{
if(ms->mBMan == true)
{
ms->mBMan = false;
return sizeof(MyStruct);
}else
{
ms->mID = -1;
ms->mAge = -1;
return sizeof(MyStruct);
}
}
};
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using System.Runtime.InteropServices;

namespace CSharpStudy
{
class Program
{
#region DLL import Study
[StructLayout(LayoutKind.Explicit, Pack = 1)]
public struct MyStruct
{
[FieldOffset(0)] public int mID;
[FieldOffset(4)] public bool mBMan;
[FieldOffset(5)] public int mAge;
}

[StructLayout(LayoutKind.Sequential)]
public struct MyStruct2
{
public int mID;
public bool mBMan;
public int mAge;
}

[StructLayout(LayoutKind.Explicit)]
public struct MyStructExplicit
{
[FieldOffset(0)] public Byte mByte;
[FieldOffset(4)] public int mID;
}

[StructLayout(LayoutKind.Explicit, Pack = 1)]
public struct MyStructExplicit2
{
[FieldOffset(0)]
public Byte mByte;
[FieldOffset(1)]
public int mID;
}

[StructLayout(LayoutKind.Explicit, Pack = 2)]
public struct MyStructExplicit3
{
[FieldOffset(0)] public Byte mByte;
[FieldOffset(2)] public int mID;
}

[DllImport("TESTDLL.dll", EntryPoint = "?MyAdd@MyMath@@SGHHH@Z")]
public static extern int MyAdd(int a, int b);

[DllImport("TESTDLL.dll", EntryPoint = "?MySubstract@MyMath@@SAHHH@Z", CallingConvention = CallingConvention.Cdecl)]
public static extern int MySubstract(int a, int b);

[DllImport("TESTDLL.dll", EntryPoint = "MyMultiple", CallingConvention = CallingConvention.Cdecl)]
public static extern double MyMultiple(double a, double b);

// MyDivision is a __stdcall global, so its decorated name must be given as the EntryPoint
// (the decorated name below is what dumpbin should report for a 32-bit build; verify it with
//  dumpbin.exe /all TESTDLL.dll)
[DllImport("TESTDLL.dll", EntryPoint = "?MyDivision@@YGNNN@Z")]
public static extern double MyDivision(double a, double b);

// extern "C" functions default to __cdecl
[DllImport("TESTDLL.dll", EntryPoint = "ModifyMyStruct", CallingConvention = CallingConvention.Cdecl)]
public static extern int ModifyMyStruct(ref MyStruct ms);

// MyStruct2 maps to the same native export; only its managed layout differs
[DllImport("TESTDLL.dll", EntryPoint = "ModifyMyStruct", CallingConvention = CallingConvention.Cdecl)]
public static extern int ModifyMyStruct2(ref MyStruct2 ms2);
#endregion

static void Main(string[] args)
{
#region DLL import Study
int a = 1;
int b = 2;
int sum = 0;
sum = MyAdd(a, b);
Console.WriteLine(string.Format("{0} + {1} = {2}", a, b, sum));

int substractionresult = 0;
substractionresult = MySubstract(a, b);
Console.WriteLine(string.Format("{0} - {1} = {2}", a, b, substractionresult));

double multiplier = 1;
double multiplicand = 2;
double multipleresult = 0;
multipleresult = MyMultiple(multiplier, multiplicand);
Console.WriteLine(string.Format("{0} * {1} = {2}", multiplier, multiplicand, multipleresult));

double divisor = 1;
double dividend = 2;
double divisionresult = 0;
divisionresult = MyDivision(divisor, dividend);
Console.WriteLine(string.Format("{0} / {1} = {2}", divisor, dividend, divisionresult));

MyStruct ms = new MyStruct();
MyStruct2 ms2 = new MyStruct2();
MyStructExplicit mse = new MyStructExplicit();
MyStructExplicit2 mse2 = new MyStructExplicit2();
MyStructExplicit3 mse3 = new MyStructExplicit3();
Int32 sizeofms = 0;
ms.mID = 4;
ms.mBMan = false;
ms.mAge = 4;
ms2.mID = 5;
ms2.mBMan = false;
ms2.mID = 5;
mse.mID = 1;
mse.mByte = 1;
mse2.mID = 2;
mse2.mByte = 2;
mse3.mID = 3;
mse3.mByte = 3;
Console.WriteLine(string.Format("sizeof(MyStruct) = {0}", Marshal.SizeOf(ms)));
Console.WriteLine(string.Format("sizeof(MyStructExplicit) = {0}", Marshal.SizeOf(mse)));
Console.WriteLine(string.Format("sizeof(MyStructExplicit2) = {0}", Marshal.SizeOf(mse2)));
Console.WriteLine(string.Format("sizeof(MyStructExplicit3) = {0}", Marshal.SizeOf(mse3)));
Console.WriteLine(string.Format("ms.mID = {0}, ms.mBMan = {1}, ms.mAge = {2}", ms.mID, ms.mBMan, ms.mAge));
sizeofms = ModifyMyStruct(ref ms);
Console.WriteLine(string.Format("ms.mID = {0}, ms.mBMan = {1}, ms.mAge = {2}, sizeofms = {3}", ms.mID, ms.mBMan, ms.mAge, sizeofms));
Console.WriteLine(string.Format("ms2.mID = {0}, ms2.mBMan = {1}, ms2.mAge = {2}", ms2.mID, ms2.mBMan, ms2.mAge));
sizeofms = ModifyMyStruct2(ref ms2);
Console.WriteLine(string.Format("ms2.mID = {0}, ms2.mBMan = {1}, ms2.mAge = {2}, sizeofms = {3}", ms2.mID, ms2.mBMan, ms2.mAge, sizeofms));
#endregion

Console.ReadKey();
}
}
}

Output:
PlatformInvokeDemo

Analysis of the example above:
We declared MyAdd as __stdcall and it is a static member of a class, so when importing it in C# we must specify its decorated name (VS ships with dumpbin, which can dump the symbol table of TESTDLL.dll – dumpbin.exe /all TESTDLL.dll > TestDllDump.txt – and there we can find MyAdd's decorated name; otherwise the EntryPoint MyAdd cannot be found). We would also have to specify the __stdcall calling convention, but since __stdcall is the default value of CallingConvention it can be omitted here.

We declared MySubstract with the C calling convention __cdecl and it is also a static member, so in C# we must specify both the decorated name and CallingConvention = CallingConvention.Cdecl; otherwise we get errors such as an unbalanced call stack.

We declared MyMultiple as __cdecl and exported it as a global function inside extern "C", so we only need to specify the __cdecl convention and can refer to the method directly by the name MyMultiple.

We declared MyDivision as __stdcall and it is a global function, but because of the way __stdcall decorates exported names (encoding the function name, the parameter byte count, etc.), we cannot call it simply as MyDivision; the full decorated name must be given in EntryPoint.

Besides locating the right function and specifying the right calling convention, when managed code calls unmanaged code we must ensure the managed struct and the unmanaged struct share the same memory layout, so the unmanaged code reads the managed data correctly.

These structures can have any legal name; there is no relationship between the native and managed version of the two structures other than their data layout. Therefore, it is vital that the managed version contains fields that are the same size and in the same order as the native version.
As the official docs point out, the names of the managed types don't matter; what is vital is that the memory layouts of the structures match.

So why is memory alignment needed at all?
The following is paraphrased from Baidu Baike:

Platform (portability) reasons
Not all hardware platforms can access arbitrary data at arbitrary addresses; some platforms can only fetch certain data types at certain addresses, and raise a hardware exception otherwise.
Performance reasons
Data structures (especially the stack) should be aligned on natural boundaries whenever possible, because accessing unaligned memory may cost the processor two memory accesses, whereas an aligned access needs only one.

As the test shows, when StructLayoutAttribute.Pack sets MyStruct's alignment to 1, MyStruct is only 9 bytes, while MyStruct2 uses the default alignment – the size of the largest field in the struct, here int, i.e. 4 bytes – so MyStruct2 is 12 bytes. The C++ side reports sizeof(MyStruct) as 12 (i.e. 4-byte aligned), so passing the packed MyStruct leads to wrong data access, while MyStruct2 reads the correct data.

MyStructExplicit, MyStructExplicit2 and MyStructExplicit3 then show how the Pack value (the alignment size) affects the size of the structure.

More:
StructLayoutAttribute.Pack
Before Visual Studio 2015, the Microsoft-specific keywords __alignof and __declspec(align(#)) could be used to specify alignment larger than the default. Starting with Visual Studio 2015, use the C++11 standard keywords alignof and alignas (C++) for maximum code portability.
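The packing behaviour described above can be reproduced with two tiny structs (my own illustration, not the article's TESTDLL types): with default packing a byte followed by an int occupies 8 bytes, while Pack = 1 shrinks it to 5.

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct DefaultPack { public byte B; public int I; }

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct OnePack { public byte B; public int I; }

class PackDemo
{
    static void Main()
    {
        // With default packing the int must sit on a 4-byte boundary,
        // so 3 padding bytes follow B
        Console.WriteLine(Marshal.SizeOf(typeof(DefaultPack))); // 8
        // Pack = 1 removes the padding
        Console.WriteLine(Marshal.SizeOf(typeof(OnePack)));     // 5
    }
}
```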

Event

C#'s event mechanism corresponds to triggering listener callbacks in the listener/observer pattern. A Delegate is like a callback in C++ – invoked when the event occurs.

"The common language runtime's (CLR's) event model is based on delegates."
the event keyword

To define an Event we need to do the following; here we simulate new-mail notifications as an example:

  1. Define the EventArgs type (it must derive from EventArgs and carry the information to pass when the event fires; if nothing needs to be passed, use EventArgs itself)
// Define a type that will hold any additional information that should be sent to receivers of the event notification
internal class NewMailEventArgs : EventArgs
{
public NewMailEventArgs(String from, String to, String subject)
{
m_From = from;
m_To = to;
m_Subject = subject;
}

public String From{ get { return m_From; } }

private readonly String m_From;

public String To { get { return m_To; } }

private readonly String m_To;

public String Subject { get { return m_Subject; } }

private readonly String m_Subject;
}
  1. Define the event member, which specifies what kind of Event can be listened for
class EmailManager
{
// Define the event member
// Although this is only a single line,
// the compiler generates the add and remove accessor code for this event
// (see the screenshot below),
// so through NewMail we can add and remove delegates that listen for NewMailEventArgs
// EventHandler determines the prototype of the listening delegate:
// public delegate void EventHandler(object sender, EventArgs e);
// the event being listened for is NewMailEventArgs
public event EventHandler<NewMailEventArgs> NewMail;
}
![EventExtraCode](/img/CSharp/EventDefinition.PNG)
  1. Define the class that listens for the event, providing methods to register and unregister the listener
class Fax
{
public Fax(EmailManager em)
{
// Trigger add_NewMail method
em.NewMail += FaxMsg;
}

// Delegate that is used to listen for NewMailEventArgs
public void FaxMsg(Object sender, NewMailEventArgs e)
{
Console.WriteLine("Faxing mail message:");
Console.WriteLine("From = {0}, To = {1}, Subject = {2}", e.From, e.To, e.Subject);
}

// Remove listener for NewMailEventArgs
public void Unregester(EmailManager em)
{
// Trigger remove_NewMail method
em.NewMail -= FaxMsg;
}
}
  1. Define the methods that raise the event and notify listeners
class EmailManager
{
......

// Define a method responsible for raising the event
// to notify registered objects that the event has occurred
// If this class is sealed, make this method private and nonvirtual
// Event notification
protected virtual void OnNewMail(NewMailEventArgs e)
{
// Copy a reference to the delegate field now into a temporary field for thread safety
// Be careful of race conditions
EventHandler<NewMailEventArgs> temp = NewMail;
// There is a thread-safety concern here, but since a delegate is immutable,
// once NewMail has been copied into the temporary variable temp,
// it no longer matters how others change NewMail
// If any methods registered interest with our event, notify them
if (temp != null)
{
temp(this, e);
}
}

// Define a method that translates the input into the desired event
// Event trigger
public void SimulateNewMail(String from, String to, String subject)
{
// Hold the information we want to pass
NewMailEventArgs e = new NewMailEventArgs(from, to, subject);

// Call OnNewMail to notify registered objects that the event has occurred
OnNewMail(e);
}
}
  1. Test the Event
static void Main(string[] args)
{
EmailManager emailmanager = new EmailManager();

Fax fax = new Fax(emailmanager);

emailmanager.SimulateNewMail("Tony", "Tom", "Hello World!");

fax.Unregester(emailmanager);

emailmanager.SimulateNewMail("Tom", "Tony", "Hello World Again!");
}

Test Result:
EventTestResult
We successfully added our custom event listener and then successfully removed it again.

Note:
In general, event listening is designed around a Dictionary that stores EventKey → Delegate pairs: to add a listener, check whether the event already exists in the Dictionary and combine the delegate in (creating the entry if absent); removing a listener works the same way.
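A minimal sketch of that Dictionary-based design (all names here are my own; 《CLR via C#》 describes a similar EventSet class):

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of a Dictionary-based event store: EventKey -> Delegate chain.
public sealed class EventSet
{
    private readonly Dictionary<string, Delegate> mEvents = new Dictionary<string, Delegate>();

    public void Add(string eventKey, Delegate handler)
    {
        Delegate d;
        mEvents.TryGetValue(eventKey, out d);
        // Create the entry if the event is unknown, otherwise append to the chain
        mEvents[eventKey] = Delegate.Combine(d, handler);
    }

    public void Remove(string eventKey, Delegate handler)
    {
        Delegate d;
        if (mEvents.TryGetValue(eventKey, out d))
        {
            d = Delegate.Remove(d, handler);
            if (d != null) mEvents[eventKey] = d;
            else mEvents.Remove(eventKey); // drop the entry when no listeners remain
        }
    }

    public void Raise(string eventKey, object sender, EventArgs e)
    {
        Delegate d;
        if (mEvents.TryGetValue(eventKey, out d))
        {
            d.DynamicInvoke(sender, e); // invoke the whole delegate chain
        }
    }
}

class EventSetDemo
{
    static void Main()
    {
        var es = new EventSet();
        EventHandler<EventArgs> h = (s, e) => Console.WriteLine("NewMail fired");
        es.Add("NewMail", h);
        es.Raise("NewMail", null, EventArgs.Empty); // prints "NewMail fired"
        es.Remove("NewMail", h);
        es.Raise("NewMail", null, EventArgs.Empty); // no listeners, nothing printed
    }
}
```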

Chars, Strings, and Working with Text

Chars:
"In the .NET Framework, characters are always represented in 16-bit Unicode code values, easing the development of global applications. A character is represented with an instance of the System.Char structure (a value type)."

Talking about chars and strings means talking about character encodings. As the quote shows, a .NET Char is always a 16-bit Unicode code value, mainly for globalization (so every language's characters can be represented). Also note that Char is a structure, i.e. a value type.

Methods on the character itself (Char):
Char also provides many methods for querying what kind of character a value is, e.g. IsDigit, IsLetter, IsWhiteSpace, IsUpper, GetUnicodeCategory (the character's Unicode category)……

Char c = 'a';
UnicodeCategory uc = Char.GetUnicodeCategory(c);
Console.WriteLine("UnicodeCategory = {0}", uc.ToString());
c = '1';
uc = Char.GetUnicodeCategory(c);
Console.WriteLine("UnicodeCategory = {0}", uc.ToString());

Character globalization (CultureInfo):
Many Char methods have overloads that take a CultureInfo parameter, targeting a specific language for globalization.

CultureInfo ci = CultureInfo.CurrentCulture;
Console.WriteLine("CurrentCulture = {0}", ci);

CharAndCultureInfoOutput

Strings:
“The String type is derived immediately from Object, making it a reference type.”

Constructing a String:

  1. Construct from a string literal
    Although String is a reference type, we don't build a String with new; we use a string literal directly (e.g. String s = "Tony";)
  2. Escape sequences with special meanings are supported, as in C++
    String s = "Hi\r\nthere";
  3. Cross-platform considerations
    Special characters are represented differently on different platforms, so for portability it is best to use the members of Environment (e.g. Environment.NewLine) to express environment-specific characters
String s1= "Tony";
String s2 = "Hi\nTony!";
String s3 = "Hi" + Environment.NewLine + "Tom!";
Console.WriteLine("s1 = {0}", s1);
Console.WriteLine("s2 = {0}", s2);
Console.WriteLine("s3 = {0}", s3);

StringConstructOutput

Concatenating multiple Strings:
Because String is immutable, every run-time concatenation allocates a new String on the heap, so repeatedly concatenating strings at run time should be avoided. (Note that concatenating literals, as in the first snippet below, is folded into a single literal by the compiler and costs nothing at run time; it is repeated concatenation of non-literal strings that is expensive.) The right way to build a string incrementally is System.Text.StringBuilder.

"StringBuilder's members allow you to manipulate this character array, effectively shrinking the string or changing the characters in the string."

"Unlike a String, a StringBuilder represents a mutable string. This means that most of StringBuilder's members change the contents in the array of characters and don't cause new objects to be allocated on the managed heap."

StringBuilder avoids constructing many Strings because it maintains a mutable character array internally, so operating on a StringBuilder does not trigger new String allocations; we can use it to build the String we need dynamically.
Literal concatenation (folded at compile time):

String s = "Hi" + "there!";

Building the string dynamically with StringBuilder:
String sp1 = "Tony";
String sp2 = " and ";
String sp3 = "Tom";
StringBuilder sb = new StringBuilder("Hello ", 50);
sb.Append(sp1);
sb.Append(sp2);
sb.Append(sp3);
Console.WriteLine("sb = {0}", sb.ToString());

StringCatenationOutput

String comparison:
String.Compare(***)
……
When comparing, remember that different languages may be involved; pass a CultureInfo to specify the culture used as the comparison context.

Also, when a source file directly contains text in a specific language or its Unicode encoding, save the file in a Unicode format, otherwise the compiler may fail to parse it.

Large numbers of string comparisons, especially culture-aware ones, are expensive and should be avoided where possible.

Since String is immutable, we can reuse existing Strings instead of repeatedly constructing identical ones and burdening memory.
The CLR maintains an internal hash table in which the strings act as keys and references to the String objects as values. Because String is an immutable reference type, we can consult this internal hash table to see whether an identical String already exists, and thus avoid rebuilding the same String.

String ss = "internal string";
String sintern = String.Intern(ss);
Console.WriteLine("sintern = {0}",sintern);

StringConstructionWithMemorySave
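A small sketch of how interning deduplicates equal strings (String.Intern returns the pooled instance shared with the literal):

```csharp
using System;

class InternDemo
{
    static void Main()
    {
        // Built at run time, so it is a distinct object from the literal "hi"
        string runtime = new string(new[] { 'h', 'i' });
        Console.WriteLine(ReferenceEquals(runtime, "hi"));                // False
        // Intern returns the single pooled instance, shared with the literal
        Console.WriteLine(ReferenceEquals(string.Intern(runtime), "hi")); // True
    }
}
```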

“String objects referred to by the internal hash table can’t be freed until the AppDomain is unloaded or the process terminates.”

"System.Runtime.CompilerServices.CompilationRelaxations.NoStringInterning flag will control whether to intern all of the strings."

Security String:
System.Security.SecureString ……

Note:
String object is immutable. The String class is sealed, which means that you cannot use it as a base class for your own type.

Enumerated and Bit Flags

  1. Enum
    "Every enumerated type is derived directly from System.Enum, which is derived from System.ValueType, which in turn is derived from System.Object"
    So first of all, an enumerated type is a Value Type.
    The benefits of using enumerated types:
    1. "Enumerated types make the program much easier to write, read, and maintain." (self-describing and readable, no hard-coded magic numbers)
    2. "Enumerated types are strongly typed." (passing the wrong type is a compile-time error)
      Enum is a fundamental, object-oriented type in C#, and C# provides many conversion methods (e.g. Enum to String – ToString(); String to Enum – Parse())
enum Colors
{
BLACK = 0,
RED = 1,
GREEN = 2,
BLUE = 3,
WHITE = 4
};

static void Main(string[] args)
{
Colors c = Colors.RED;
Console.WriteLine("Decimal format: c = {0}", c.ToString("D"));
Console.WriteLine("General format: c = {0}", c.ToString("G"));
Colors c2 = (Colors)Enum.Parse(typeof(Colors), "GREEN", true);
Console.WriteLine("Decimal format: c2 = {0}", c2.ToString("D"));
Console.WriteLine("General format: c2 = {0}", c2.ToString("G"));
}
Output:
![EnumeratesOuput](/img/CSharp/Enumerates.PNG)
    Another point worth noting is that an enum cannot define methods, properties, or events.
    For methods, we can use C#'s extension methods feature (see the Methods section) to attach methods to an enum.
[Flags]
enum Actions
{
NONE = 0,
READ = 0x0001,
WRITE = 0x0002,
READANDWRITE = READ | WRITE,
DELETE = 0x0004,
QUERY = 0x0008,
Sync = 0x0010
};

internal static class ActionsExtensionMethods
{
public static Actions Set(this Actions flags, Actions setflags)
{
return flags | setflags;
}
}

static void Main(string[] args)
{
Actions actions = Actions.READ;
Console.WriteLine("actions = {0}", actions.ToString());
actions = actions | Actions.DELETE;
Console.WriteLine("actions = {0}",actions.ToString());
actions = actions.Set(Actions.WRITE);
Console.WriteLine("actions = {0}", actions.ToString());
}
Output:
![EnumWithExtentionMethod](/img/CSharp/EnumWithExtentionMethod.PNG)
    Note:
    "Symbols defined by an enumerated type are constant values."
  1. Bit
    If each Enum member carries one meaning, bit flags let each bit carry one meaning within a set.
    The most common use is file access/attribute flags:
public enum FileAttributes {
ReadOnly = 0x00001,
Hidden = 0x00002,
System = 0x00004,
Directory = 0x00010,
Archive = 0x00020,
Device = 0x00040,
Normal = 0x00080,
Temporary = 0x00100,
SparseFile = 0x00200,
ReparsePoint = 0x00400,
Compressed = 0x00800,
Offline = 0x01000,
NotContentIndexed = 0x02000,
Encrypted = 0x04000,
IntegrityStream = 0x08000,
NoScrubData = 0x20000
}

A convenient idiom here is to treat an enum as a set of bits.
Adding the [Flags] attribute in front of the enum definition makes its members be treated as a group of bit flags.

[Flags]
enum Actions
{
NONE = 0,
READ = 0x0001,
WRITE = 0x0002,
READANDWRITE = READ | WRITE,
DELETE = 0x0004,
QUERY = 0x0008,
Sync = 0x0010
};

static void Main(string[] args)
{
Actions actions = Actions.READ;
Console.WriteLine("actions = {0}", actions.ToString());
actions = actions | Actions.DELETE;
Console.WriteLine("actions = {0}",actions.ToString());
}

BitWithFlag
BitWithoutFlag

Custom Attributes

“they’re just a way to associate additional information with a target.The compiler emits this additional information into the managed module’s metadata.”
In other words, custom attributes associate additional information with a target (the target may be a class, an event, a method, …), and the compiler emits this additional information into the module's metadata.

Let's look at a Custom Attributes example:

using System;
[assembly: SomeAttr] // Applied to assembly
[module: SomeAttr] // Applied to module
[type: SomeAttr] // Applied to type
internal sealed class SomeType<[typevar: SomeAttr] T> { // Applied to generic type variable
[field: SomeAttr] // Applied to field
public Int32 SomeField = 0;
[return: SomeAttr] // Applied to return value
[method: SomeAttr] // Applied to method
public Int32 SomeMethod(
[param: SomeAttr] // Applied to parameter
Int32 SomeParam)
{
return SomeParam;
}

[property: SomeAttr] // Applied to property
public String SomeProp {
[method: SomeAttr] // Applied to get accessor method
get { return null; }
}

[event: SomeAttr] // Applied to event
[field: SomeAttr] // Applied to compiler-generated field
[method: SomeAttr] // Applied to compiler-generated add & remove methods
public event EventHandler SomeEvent;
}

From the above we can see that custom attributes can be applied to a wide range of targets, including assemblies, modules, types, fields, methods, and so on.

“A custom attribute is simply an instance of a type.”
A custom attribute is really just a class; applying one causes the class to be constructed, and the attribute's information is written into the metadata.

Knowing that a custom attribute is a class, how do we define one?
“Common Language Specification (CLS) compliance, custom attribute classes must be derived, directly or indirectly, from the public abstract System.Attribute class.”
As the quote says, a custom attribute must derive, directly or indirectly, from the System.Attribute class.

Next, let's use the example from MSDN to look at how custom attributes work in detail.

using System;
using System.Reflection;

namespace CustomAttrCS {
// An enumeration of animals. Start at 1 (0 = uninitialized).
public enum Animal {
// Pets.
Dog = 1,
Cat,
Bird,
}

// A custom attribute to allow a target to have a pet.
// A custom attribute must derive, directly or indirectly, from Attribute
public class AnimalTypeAttribute : Attribute {
// The constructor is called when the attribute is set.
// We can provide both parameterized and parameterless constructors
public AnimalTypeAttribute()
{
thePet = Animal.Bird;
}

public AnimalTypeAttribute(Animal pet) {
thePet = pet;
}

// Keep a variable internally ...
protected Animal thePet;

// .. and show a copy to the outside world.
public Animal Pet {
get { return thePet; }
set { thePet = value; }
}
}

// A test class where each method has its own pet.
class AnimalTypeTestClass {
// When applying a custom attribute, we can invoke a specific constructor
[AnimalType(Animal.Dog)]
public void DogMethod() { }
// Besides constructor arguments, we can also set a property of the attribute class to a specific value
[AnimalType(Pet = Animal.Cat)]
public void CatMethod() { }

[AnimalType()]
public void BirdMethod() { }
}

class DemoClass {
static void Main(string[] args) {
// Use reflection to check whether the methods of AnimalTypeTestClass have the attribute applied
AnimalTypeTestClass testClass = new AnimalTypeTestClass();
Type type = testClass.GetType();
// Iterate through all the methods of the class.
foreach(MethodInfo mInfo in type.GetMethods()) {
// Iterate through all the Attributes for each method.
foreach (Attribute attr in
Attribute.GetCustomAttributes(mInfo)) {
// Check for the AnimalType attribute.
if (attr.GetType() == typeof(AnimalTypeAttribute))
Console.WriteLine(
"Method {0} has a pet {1} attribute.",
mInfo.Name, ((AnimalTypeAttribute)attr).Pet);
}

}
}
}
}

Output:
CustomAttribute
Note:
“all non-abstract attributes must contain at least one public constructor.” (non-abstract attribute classes must have at least one public constructor)

Now that we know how to define a custom attribute, and attributes can be applied to assemblies, modules, types, and so on, how do we restrict where one may be used?
AttributeUsageAttribute specifies where a custom attribute may be applied.
AttributeTargets enumerates all the targets that can be specified.
AttributeUsageAttribute also has two further settings, AllowMultiple and Inherited: the former decides whether the attribute may be applied to the same target more than once, and the latter decides whether the attribute is inherited by derived classes and overriding members.

[AttributeUsage(AttributeTargets.Method, Inherited = false)]
public class AnimalTypeAttribute : Attribute {
......
}

class AnimalTypeTestClass
{
// Because AnimalTypeAttribute is restricted to methods, it cannot be applied to the constructor
//[AnimalType(Animal.Cat)]
//AnimalTypeTestClass() { }

[AnimalType(Animal.Dog)]
public void DogMethod() { }

......
}

Now that we know how to define and restrict custom attributes, what practical use do they have?
Remember how the [Flags] attribute, discussed in Enumerated Types and Bit Flags, changes the behavior of Enum.ToString() and Format()?
That works precisely because the Flags attribute bound to the enum is checked dynamically.
The underlying mechanism for that dynamic check is reflection (see the Hosting, AppDomain, Assembly, Reflection chapter).
Remember how the earlier MSDN example checked whether each method of a class had the attribute applied? (It used reflection.)

// Use reflection to check whether the methods of AnimalTypeTestClass have the attribute applied
AnimalTypeTestClass testClass = new AnimalTypeTestClass();
Type type = testClass.GetType();
// Iterate through all the methods of the class.
foreach(MethodInfo mInfo in type.GetMethods()) {
// Iterate through all the Attributes for each method.
foreach (Attribute attr in
Attribute.GetCustomAttributes(mInfo)) {
// Check for the AnimalType attribute.
if (attr.GetType() == typeof(AnimalTypeAttribute))
Console.WriteLine(
"Method {0} has a pet {1} attribute.",
mInfo.Name, ((AnimalTypeAttribute)attr).Pet);
}
}

This way we can dynamically determine whether an attribute is applied and retrieve the information it carries.
.NET also provides the System.Reflection.CustomAttributeExtensions class, which defines three methods for retrieving custom attribute information from the various targets (module, event, method, and so on):

  1. IsDefined
  2. GetCustomAttributes
  3. GetCustomAttribute
    The second and third methods invoke the attribute class's constructor when called. So how can we obtain attribute information without triggering the constructor?
    The answer: System.Reflection.CustomAttributeData's GetCustomAttributes method (it also uses reflection, but note that CustomAttributeData.GetCustomAttributes has only four overloads, taking Assembly, Module, ParameterInfo, and MemberInfo)
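A minimal sketch of inspecting attributes without constructing them via CustomAttributeData; the SampleAttribute and Target names here are made up for illustration:

```csharp
using System;
using System.Reflection;

public class SampleAttribute : Attribute
{
    public SampleAttribute(int id) { }
}

public class Target
{
    [Sample(42)]
    public void Tagged() { }
}

class Demo
{
    static void Main()
    {
        MethodInfo mInfo = typeof(Target).GetMethod("Tagged");

        // CustomAttributeData reads metadata only; the attribute's
        // constructor is never invoked here.
        foreach (CustomAttributeData cad in CustomAttributeData.GetCustomAttributes(mInfo))
        {
            Console.WriteLine(cad.Constructor.DeclaringType);  // the attribute type
            foreach (CustomAttributeTypedArgument arg in cad.ConstructorArguments)
                Console.WriteLine(arg.Value);                  // the recorded ctor argument
        }
    }
}
```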

Now that we can check whether a method carries a custom attribute, how do we decide whether two instances are exactly equal (all their custom attribute state matches)?
We can use System.Attribute's Equals method; it is overridden to use reflection to compare each field of the attributes.
Alternatively, we can override Equals and Match in our own attribute class to implement a specific comparison.

Points to note when defining and applying custom attributes:

  1. "When applying an attribute to a target in source code, the C# compiler allows
    you to omit the Attribute suffix to reduce programming typing and to improve the
    readability of the source code." (when applying a custom attribute, the Attribute suffix may be omitted)
  2. "When defining an attribute class's instance constructor, fields, and properties, you must restrict yourself to a small subset of data types." (when defining a custom attribute, its fields, properties, and constructor parameters must use only a small CLS-compliant subset of data types)
  3. "Be aware that only Attribute, Type, and MethodInfo classes implement reflection
    methods that honor the Boolean inherit parameter." (only Attribute, Type, and MethodInfo implement reflection methods that honor the Boolean inherit parameter)

Exceptions and State Management

What is an Exception?
“An exception is when a member fails to complete the task it is supposed to perform as indicated by its name.”

Exception-Handling Mechanics
The .NET Framework exception handling mechanism is built using the Structured Exception Handling (SEH) mechanism offered by Windows.

First, the most basic form of catching an exception:

try{
// Put code requiring graceful recovery and/or cleanup operations here...
}
catch (Exception)
{
// Put code that recovers from any kind of exception
}
finally
{
// Put code that cleans up any operations started within the try block here...
// The code in here ALWAYS executes, regardless of whether an exception is thrown.
}

Try Block:
“A try block contains code that requires common cleanup operations, exception recovery operations, or both.”

Note:
“Sometimes developers ask how much code they should put inside a single try
block. The answer to this depends on state management.”

Catch Block:
“A catch block contains code to execute in response to an exception.”

Note:
“When debugging through a catch block by using Microsoft Visual Studio, you can
see the currently thrown exception object by adding the special $exception variable name to a watch window.” (while debugging inside a catch block, add the special $exception variable to a watch window to inspect the exception)

Finally Block:
“A finally block contains code that’s guaranteed to execute. Typically, the code in a finally block performs the cleanup operations required by actions taken in the try block.”

CLS and Non-CLS Exceptions:
CLS(Common Language Specification) – throw Exception-derived objects
Non-CLS – throw Exception not derived from Exception

After CLR 2.0:
“Microsoft introduced a new RuntimeWrappedException class (defined in the System.Runtime.CompilerServices namespace). This class is derived from Exception, so it is a CLS-compliant exception type. The RuntimeWrappedException class contains a private field of type Object (which can be accessed by using RuntimeWrappedException’s WrappedException read-only property). In CLR 2.0, when a non–CLS-compliant exception is thrown, the CLR automatically constructs an instance of the RuntimeWrappedException class and initializes its private field to refer to the object that was actually thrown.”(since CLR 2.0, the RuntimeWrappedException class wraps every non-CLS exception so that it becomes a CLS-compliant exception)
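A sketch of the catch pattern this enables; the throw here is a placeholder, since a genuinely non-CLS exception can only originate from another language (e.g. C++/CLI throwing a raw object):

```csharp
using System;
using System.Runtime.CompilerServices;

class WrapDemo
{
    static void Main()
    {
        try
        {
            // Placeholder: imagine code here that throws a non-CLS
            // exception from another language.
            throw new Exception("placeholder");
        }
        catch (RuntimeWrappedException rwe)
        {
            // The object that was actually thrown.
            Console.WriteLine(rwe.WrappedException);
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
    }
}
```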

If you want to restore the pre-2.0 behavior:

using System.Runtime.CompilerServices;
[assembly:RuntimeCompatibility(WrapNonExceptionThrows = false)]

Next, let's look at the base class of all exceptions:
System.Exception
Here are some of its important properties:
ExceptionProperties
The most important ones:

  1. Message (describes the important information related to the exception)
  2. StackTrace (describes the chain of methods that led to the exception being thrown)

We can also use System.Diagnostics.StackTrace to obtain detailed stack information.

Sometimes, however, certain methods do not show up in the detailed stack trace, for two reasons:

  1. the stack is really a record of where the thread should return to, not where the thread has come from. (the stack records return points, not the current location)
  2. The just-in-time (JIT) compiler can inline methods to avoid the overhead of calling and returning from a separate method (the JIT compiler may inline some methods, expanding them at the call site, so they never appear on the stack)
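A minimal sketch of capturing a stack trace manually with System.Diagnostics.StackTrace (passing true also collects file and line information when PDBs are available):

```csharp
using System;
using System.Diagnostics;

class TraceDemo
{
    static void Inner()
    {
        // Capture the current call stack at this point.
        StackTrace st = new StackTrace(true);
        Console.WriteLine(st.ToString());
    }

    static void Main()
    {
        Inner();   // the trace lists Inner, then Main (unless inlined)
    }
}
```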

To prevent JIT inlining, use MethodImplOptions.NoInlining with System.Runtime.CompilerServices.MethodImplAttribute:

[MethodImpl(MethodImplOptions.NoInlining)]
public void SomeMethod() {
......
}

The FCL (Framework Class Library) defines many ready-made exception types.

Throwing an Exception:
When we need to throw an exception ourselves, we should consider:

  1. Which Exception class to derive from (or whether to use an existing Exception class)
  2. What string to pass to the exception's constructor (a message explaining why the method could not complete its task, which is why the exception is thrown)

Defining Your Own Exception Class:
Defining your own exception class is error-prone and tedious.
The reason:
"The main reason for this is because all Exception-derived types should be serializable so that they can cross an AppDomain boundary or be written to a log or database" (we must make sure a custom exception class supports serialization, because it may cross an AppDomain boundary or be written to a log or database)

Note:
“When you throw an exception, the CLR resets the starting point for the exception;
that is, the CLR remembers only the location where the most recent exception object
was thrown.” (when we rethrow an exception, the CLR resets the starting point and remembers only the most recently thrown exception object)

Guidelines and Best Practices:

  1. Use finally Blocks Liberally (finally always executes; put cleanup operations there)
  2. Don't Catch Everything
  3. Recovering Gracefully from an Exception (catch exceptions that are known in advance and try to recover from them)
  4. Backing Out of a Partially Completed Operation When an Unrecoverable Exception Occurs – Maintaining State
public void SerializeObjectGraph(FileStream fs, IFormatter formatter, Object rootObj) {
// Save the current position of the file.
Int64 beforeSerialization = fs.Position;
try {
// Attempt to serialize the object graph to the file.
formatter.Serialize(fs, rootObj);
}
catch { // Catch any and all exceptions.
// If ANYTHING goes wrong, reset the file back to a good state.
fs.Position = beforeSerialization;
// Truncate the file.
fs.SetLength(fs.Position);
// NOTE: The preceding code isn't in a finally block because
// the stream should be reset only when serialization fails.
// Let the caller(s) know what happened by re-throwing the SAME exception.
throw;
}
}
Note:
"After you’ve caught and handled the exception, don’t swallow it—let the caller know that the exception occurred. You do this by re-throwing the same exception." (especially for code used by others: rethrow the exception so callers can catch it and know what happened)
  5. Hiding an Implementation Detail to Maintain a “Contract”
    “you might find it useful to catch one exception and re-throw a different exception.” (throw an exception that better fits the current API's behavior, or enrich the exception with information relevant to the current API)
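A sketch of the catch-and-wrap pattern the quote describes; the class, method, and path names here are illustrative:

```csharp
using System;
using System.IO;

class NameResolver
{
    // Callers of this API shouldn't need to know a file is involved,
    // so the low-level IOException is wrapped in a more meaningful one.
    public static string LookupName(int id)
    {
        try
        {
            return File.ReadAllText("names/" + id + ".txt");
        }
        catch (IOException e)
        {
            // Preserve the original exception as InnerException.
            throw new InvalidOperationException("Name lookup failed for id " + id, e);
        }
    }
}
```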

Unhandled Exceptions:
When do unhandled exceptions occur?
“When an exception is thrown, the CLR climbs up the call stack looking for catch blocks that match the type of the exception object being thrown. If no catch block matches the thrown exception type, an unhandled exception occurs.” (an exception thrown without a matching catch is called an unhandled exception)
For more details see the Unhandled Exceptions section of 《CLR via C#》

Note:
“When the CLR detects that any thread in the process has had an unhandled exception, the CLR terminates the process.”

Debugging Exceptions:
VS -> Debug -> Exceptions
ExceptionWindow
If "Thrown" is checked for a particular exception, the debugger breaks as soon as that exception is thrown (helping us locate it quickly). If it is not checked, the debugger still breaks, but only when the exception is unhandled.

Custom exceptions can also be added through that window.

Exception-Handling Performance Considerations:
……

Constrained Execution Regions (CERs):
……

More to study……

The Managed Heap and Garbage Collection

This section covers the CLR's important memory management mechanism (GC).
First, distinguish the stack (Stack) from the heap (Heap).
The stack/heap discussion below draws on C# Heap(ing) Vs Stack(ing) in .NET: Part I
Stack – The Stack is more or less responsible for keeping track of what’s executing in our code (or what’s been “called”).
The stack can be understood as recording the order of code execution.
Note:
The stack follows the LIFO (Last In First Out) principle.
Heap – The Heap is more or less responsible for keeping track of our objects (our data, well… most of it;)
The heap tracks dynamically allocated memory (for example, reference types).
To remember what goes on the stack and what goes on the heap, keep two rules in mind:

  1. A Reference Type always goes on the Heap; easy enough, right? (reference types are always allocated on the heap)
  2. Value Types and Pointers always go where they were declared. This is a little more complex and needs a bit more understanding of how the Stack works to figure out where “things” are declared. (value types and pointers go where they are declared)
    Let's look at an example of how allocation on the stack and heap works:
public class MyInt
{
public int MyValue;
}

public MyInt AddFive(int pValue)
{
MyInt result = new MyInt();
result.MyValue = pValue + 5;
return result;
}

When AddFive is called, the parameter pValue is first pushed onto the stack
StackAndHeapPart1
Then, because we create an instance of the reference type MyInt, memory for it is allocated on the heap, and a pointer is placed on the stack that refers to MyInt's address on the heap
StackAndHeapPart2
When we assign to result.MyValue, we use the address held by the result pointer to reach the MyValue member on the heap and modify its value.
Finally, when we return result, the stack frame is cleaned up, leaving only the data we allocated on the heap.
StackAndHeapPart3
The data left on the heap is then managed by the CLR's GC.
NOTE :
“The method does not live on the stack and is illustrated just for reference.”
Next, let's see how the CLR's GC works.
Before looking at the GC, how is heap memory allocated in C#?
This is where the new keyword comes in.
When we create a reference type with new, the following steps occur:

  1. Calculate the number of bytes required for the type's fields (compute the memory the type needs)
  2. Add the bytes required for an object's overhead (a type object pointer and a sync block index) (8 bytes for a 32-bit application, 16 bytes for a 64-bit application)
  3. Zero out the memory starting at NextObjPtr (which indicates where the next object is to be allocated within the heap), call the constructor, return the reference, and advance NextObjPtr. (memory is allocated and zeroed at the position NextObjPtr points to on the heap; the address is passed to the constructor for initialization; after initialization the reference to the object is returned, and NextObjPtr is moved to the next allocatable position on the heap.)
    Now that we know how heap memory is allocated, how does the GC manage everything allocated on the heap?

Let's look at the GC algorithms:

  1. Reference Counting Algorithm (used by COM)
    This is the familiar reference counting: an object's memory is reclaimed based on how many pointers currently refer to it.
    Drawback:
    Circular references make the memory unreclaimable forever (e.g. A holds a reference to B, and B holds a reference to A)
  2. Reference Tracking Algorithm (used by the CLR; cares only about reference type variables)
    The steps are as follows:
    1. Marking Phase
      CLR first suspends all threads in the process (prevents threads from accessing objects and changing their state while the CLR examines them)
      Marking all objects as 0 (meaning all objects should be deleted)
      Scan active roots to mark objects (the same object is not marked twice, which avoids circular references)
      In the marking phase, all threads are first suspended to prevent access to objects and their state.
      Then every object on the heap is marked 0, and all active roots (reference type variables) are scanned; if a root points to an object on the heap, that object is marked and the roots inside it are scanned and marked in turn. The key point is that an already-marked object is not scanned again: for example, a root points to A, so A is marked; inside A we find B, and since B is unmarked it is marked and scanned; inside B we find A again, but A is already marked so it is not marked again. This way, if the original root to A goes away, both A and B stay unmarked and are reclaimed, avoiding the circular reference problem.
      Note:
      “Refer to all reference type variables as roots.”
    2. Compacting Phase
      Shifts the memory consumed by the marked objects down in the heap, compacting all the surviving objects together so that they are contiguous in memory. (reduces the application's working set size, speeds up future access, and avoids address space fragmentation)
      CLR resumes all the application's threads and they continue to access the objects as if the GC never happened at all
      Note:
      A static field keeps whatever object it refers to alive forever, or until the AppDomain that the types are loaded into is unloaded
      After the marking phase, the memory of every heap object still marked 0 is reclaimed.
      The compacting phase is mainly for efficient memory usage (preventing fragmentation).
      Note that the location of static variables in memory does not change.

Next, how to improve GC performance:
The CLR's GC assumptions (the basic assumptions behind GC performance):
The newer an object is, the shorter its lifetime will be
The older an object is, the longer its lifetime will be
Collecting a portion of the heap is faster than collecting the whole heap
Based on these assumptions:
The heap is divided into generations 0, 1, and 2.
GCGenerations
Newly created objects go into generation 0. The GC examines generation 0 first; objects that survive a collection are promoted to generation 1, and once generation 1 grows large enough the GC examines generations 0 and 1 together. Likewise, objects that survive in generation 1 are promoted to generation 2.
This way the GC does not have to examine every object on the whole heap each time, which is the point of the optimization.
Note:
The managed heap supports only three generations: generations 0, 1, and 2
The garbage collector fine-tunes itself automatically based on the memory load required by your application.
For more on the GC, see the chapter The Managed Heap and Garbage Collection in 《CLR via C#》
Note:
Finalize methods are called at the completion of a garbage collection on objects that the GC has determined to be garbage. (an object's Finalize method is called before its memory is actually reclaimed)
Finalize is not equal to a destructor in C++ (Finalize != a C++ destructor)
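A small sketch of observing generations with the GC class (GC.GetGeneration and GC.Collect are standard APIs; forcing collections like this is for illustration only, and the exact promotion timing can vary by GC mode):

```csharp
using System;

class GenDemo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // freshly allocated: generation 0

        GC.Collect();                             // obj survives, typically promoted
        Console.WriteLine(GC.GetGeneration(obj));

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // generation 2 is the last stop

        GC.KeepAlive(obj);                        // keep obj rooted until here
    }
}
```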

Threading

What is a Thread?
“A thread is a Windows concept whose job is to virtualize the CPU.”

Major parts of a Thread:

  1. Thread kernel object (“a data structure that contains a bunch of properties that describe the thread”)
  2. Thread environment block (TEB) (“The TEB contains the head of the thread’s exception-handling chain. In addition, the TEB contains the thread’s thread-local storage data and some data structures for use by GDI and OpenGL graphics”)
  3. User-mode stack (“The user-mode stack is used for local variables and arguments passed to methods. It also contains the address indicating what the thread should execute next when the current method returns”)
  4. Kernel-mode stack (“The kernel-mode stack is also used when application code passes arguments to a kernel-mode function in the operating system”)
  5. DLL thread-attach and thread-detach notifications

Why do we need Threads?
Benefits:

  1. Responsiveness (without threads, only one program runs at a time, and a single hung program on a single-core CPU would hang the whole machine)
  2. Data safety (killing and restarting a hung program loses data)
  3. Performance (multi-core CPUs can execute multiple tasks at once, handling work more efficiently)

Shortcomings:

  1. Threads consume a lot of memory and take time to create and destroy (creation and destruction are slow and memory-hungry)
  2. Context switches (switching to another thread) take a lot of time

How to use Threads correctly?
“Have no more threads than the number of CPUs on the machine.”

Note:
“A CLR thread is identical to a Windows thread”

Thread Scheduling and Priorities

To be continued……

C# In Depth(third edition)

C#1

Non Generic Collections

ArrayList list = new ArrayList();

Sorting an ArrayList using IComparer


C#2

Strongly Typed Collections

List<T> list = new List<T>();

Sorting a List using IComparer or Comparison

class ProductNameComparer : IComparer<Product>
{
public int Compare(Product p1, Product p2)
{
return p1.Name.CompareTo(p2.Name);
}
}
// IComparer<T>
List<Product> products = new List<Product>();
products.Sort(new ProductNameComparer());
// Comparison<T>
products.Sort(delegate(Product x, Product y)
{
return x.Name.CompareTo(y.Name);
});

Nullable Value Type

decimal? price = null;
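A small sketch of working with a nullable value type (HasValue, Value, and the null-coalescing operator):

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        decimal? price = null;

        Console.WriteLine(price.HasValue);   // False

        price = 9.99m;
        if (price.HasValue)
            Console.WriteLine(price.Value);  // 9.99

        // ?? supplies a default when the nullable is null.
        decimal? discount = null;
        decimal effective = discount ?? 0m;
        Console.WriteLine(effective);        // 0
    }
}
```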

C#3

Properties

Automatically Implemented Properties

class ClassName
{
public type PropertyName { get; set; }
}

Sorting using a Comparison from a lambda expression

List<Product> products = new List<Product>();
products.Sort((x, y) => x.Name.CompareTo(y.Name));

Extension Method

public static class StringExtension
{
public static int getLength(this string s)
{
return s.Length;
}
}
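Usage of an extension method like the one above (it must live in a top-level static class, and the call then reads like an instance method):

```csharp
using System;

public static class StringExtension
{
    // 'this' on the first parameter makes it an extension method on string.
    public static int getLength(this string s)
    {
        return s.Length;
    }
}

class Program
{
    static void Main()
    {
        // Called as if it were an instance method of string.
        Console.WriteLine("TonyTang".getLength()); // 8
    }
}
```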

LINQ(Language-Integrated Query)

“LINQ is at the heart of the changes in C# 3. The aim is to make it easy to write queries against multiple data sources with consistent syntax and features, in a readable and composable fashion.”

List<Product> products = new List<Product>();
var filtered = from Product p in products
where p.Price > 10
select p;

C#4

Named Arguments

class Product
{
public string Name
{
get { return name; }
}
readonly string name;

public Product(string name)
{
this.name = name;
}
}

Product product = new Product(name: "TonyTang");

Optional Parameters

public int Sum(int a,int b = 0)
{
return a + b;
}
var sum = Sum(1);

DLR(Dynamic Language Runtime)

CSharp Evolution

CSharpEvolution1
CSharpEvolution2
CSharpEvolution3

Reference book downloads:
《C#入门经典第五版》
《CLR Via C# Fourth Edition》 - Jeffrey Richter
《C# in Depth 3rd Edition》 - Jon Skeet

Unity Introduction


What is Unity?

Unity is a cross-platform game engine developed by Unity Technologies and used to develop video games for PC, consoles, mobile devices and websites. First released on June 8, 2005
(from wiki)

Why should we study Unity?

  1. Cross-platform
  2. Free
  3. Complete SDK documentation
  4. Many free assets

Which programming language are we using?

  1. C# (recommended introductory book: 《C#入门经典》; advanced: 《CLR via C#》)
  2. Javascript
  3. Boo

Unity Tutorial Study

Website: Unity Tutorials

ROLL-A-BALL

Game Introduction
Control a ball to collect the cubes in the scene; the UI shows how many cubes have been collected so far, and once all cubes have been picked up via trigger collisions, the UI shows You Win

Code
PlayerController.cs

using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class PlayerController : MonoBehaviour {
public float m_Speed;

public Text m_CountText;

public Text m_WinText;

private Rigidbody m_RB;

private int m_Count;

void Start()
{
m_RB = GetComponent<Rigidbody> ();
m_Count = 0;
SetCountText();
m_WinText.text = "";
}

void FixedUpdate()
{
float moveHorizontal = Input.GetAxis ("Horizontal");
float moveVertical = Input.GetAxis ("Vertical");

Vector3 movement = new Vector3(moveHorizontal, 0.0f, moveVertical);
m_RB.AddForce (movement * m_Speed);
}

void OnTriggerEnter(Collider other)
{
if (other.gameObject.CompareTag ("Pickup"))
{
other.gameObject.SetActive(false);
m_Count++;
SetCountText();
}
}

void SetCountText()
{
m_CountText.text = "Count: " + m_Count.ToString();
if (m_Count >= 12)
{
m_WinText.text = "You Win!";
}
}
}

CameraController.cs

using UnityEngine;
using System.Collections;

public class CameraController : MonoBehaviour {
public GameObject m_Player;

private Vector3 m_Offset;

// Use this for initialization
void Start () {
m_Offset = transform.position - m_Player.transform.position;
}

// Update is called once per frame
void LateUpdate () {
transform.position = m_Player.transform.position + m_Offset;
}
}

Rotator.cs

using UnityEngine;
using System.Collections;

public class Rotator : MonoBehaviour {

// Use this for initialization
void Start () {

}

// Update is called once per frame
void Update () {
transform.Rotate (new Vector3 (15, 35, 45) * Time.deltaTime);
}
}

Captures
Game Play
Roll_A_Ball_Game_Play
Win Game
Roll_A_Ball_Game_Win

SPACE SHOOTER

Game Introduction
Top Down Game
Similar to the plane shooter 全民打飞机

Code
ShipController.cs

using UnityEngine;
using System.Collections;

[System.Serializable]
public class Boundary
{
public float m_MinX,m_MaxX,m_MinZ,m_MaxZ;
}

public class ShipController : MonoBehaviour {

public float m_Speed = 8;

public float m_Tilt = 4;

public float fireRate = 0.5F;

private float nextFire = 0.0F;

public Boundary m_Boundary;

public GameObject m_Shot;

public Transform m_ShotSpawn;

private AudioSource m_FireAudio;

private Rigidbody m_ShipRB;

private GameController m_GameController;

// Use this for initialization
void Start () {
m_ShipRB = GetComponent<Rigidbody> ();
m_FireAudio = GetComponent<AudioSource> ();
GameObject gamecontrollerobject = GameObject.FindGameObjectWithTag ("GameController");
if (gamecontrollerobject != null) {
m_GameController = gamecontrollerobject.GetComponent<GameController>();
}
if (m_GameController == null) {
Debug.Log("m_GameController == null in ShipController::Start()");
}
}

void Update(){
if (Input.GetKey(KeyCode.J) && Time.time > nextFire) {
nextFire = Time.time + fireRate;
GameObject clone = Instantiate (m_Shot, m_ShotSpawn.position, m_ShotSpawn.rotation) as GameObject;
m_FireAudio.Play();
}
}

void FixedUpdate(){
if (!m_GameController.IsGameEnd ()) {
float moveHorizontal = Input.GetAxis ("Horizontal");
float moveVertical = Input.GetAxis ("Vertical");
Vector3 movement = new Vector3 (moveHorizontal, 0.0f, moveVertical);
m_ShipRB.velocity = movement * m_Speed;

m_ShipRB.position = new Vector3 (
Mathf.Clamp (m_ShipRB.position.x, m_Boundary.m_MinX, m_Boundary.m_MaxX),
0.0f,
Mathf.Clamp (m_ShipRB.position.z, m_Boundary.m_MinZ, m_Boundary.m_MaxZ)
);
m_ShipRB.rotation = Quaternion.Euler (0.0f, 0.0f, -m_ShipRB.velocity.x * m_Tilt);
} else {
m_ShipRB.rotation = Quaternion.Euler(0.0f,0.0f,0.0f);
m_ShipRB.velocity = new Vector3(0.0f,0.0f,0.0f);
}
}
}

GameController.cs

using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class GameController : MonoBehaviour {

public GameObject m_Hazard;

public Vector3 m_SpawnValue = new Vector3(5.5f,0.0f,8.0f);

public int m_HazardCount = 4;

public float m_SpawnWait = 1.0f;

public float m_StartWait = 3.0f;

public float m_WaveWait = 4.0f;

public Text m_ScoreText;

public Text m_WinText;

public Button m_RestartButton;

public int m_WinningScore = 200;

private int m_Score = 0;

private bool m_IsGameEnd = false;

private bool m_RestartGame = false;

private AudioSource m_BackgroundAudio;

// Use this for initialization
void Start () {
StartCoroutine (SpawnAsteriod());
UpdateScore ();
m_WinText.text = "";
m_RestartButton.gameObject.SetActive (false);
m_RestartButton.onClick.AddListener (RestartGame);
m_BackgroundAudio = GetComponent<AudioSource> ();
}

void Update()
{
if (m_RestartGame) {
Debug.Log("Restart Game Now");
Application.LoadLevel(Application.loadedLevel);
}
}

public bool IsGameEnd()
{
return m_IsGameEnd;
}

private void RestartGame()
{
Debug.Log("Restart Button clicked");
m_RestartGame = true;
}

IEnumerator SpawnAsteriod(){
yield return new WaitForSeconds (m_StartWait);
while (true) {
for (int i = 0; i < m_HazardCount; i++) {
Vector3 spawnposition = new Vector3 (Random.Range (-m_SpawnValue.x, m_SpawnValue.x), 0.0f, m_SpawnValue.z);
Quaternion spawnrotation = Quaternion.identity;
Instantiate(m_Hazard,spawnposition,spawnrotation);
yield return new WaitForSeconds (m_SpawnWait);
}
yield return new WaitForSeconds (m_WaveWait);
if(m_IsGameEnd)
{
break;
}
}
}

public void AddScore(int score)
{
m_Score += score;
UpdateScore ();
}

void UpdateScore()
{
m_ScoreText.text = "Score: " + m_Score;
if (m_Score >= m_WinningScore) {
m_WinText.text = "Congratulation! You Win";
m_IsGameEnd = true;
m_RestartButton.gameObject.SetActive (true);
m_BackgroundAudio.Stop();
}
}
}

DestroyByContact.cs

using UnityEngine;
using System.Collections;

public class DestroyByContact : MonoBehaviour {

public GameObject m_ExplosionObject;

public GameObject m_PlayerExplosionObject;

private GameController m_GameController;

public int m_ScoreValue = 10;

void Start()
{
GameObject gamecontrollerobject = GameObject.FindGameObjectWithTag ("GameController");
if (gamecontrollerobject != null) {
m_GameController = gamecontrollerobject.GetComponent<GameController>();
}
if (m_GameController == null) {
Debug.Log("m_GameController == null");
}
}

void OnTriggerEnter(Collider other) {
if (other.tag == "Boundary") {
return ;
}
Debug.Log ("other.tag = " + other.tag);

Instantiate (m_ExplosionObject, transform.position, transform.rotation);
if (other.tag == "Player") {
Instantiate (m_PlayerExplosionObject, other.transform.position, other.transform.rotation);
}
if (other.tag == "Bullet") {
Debug.Log("Asteroid is destroyed by Bullet");
m_GameController.AddScore(m_ScoreValue);
}
Destroy(other.gameObject);
Destroy (gameObject);
}
}

DestroyByTime.cs

using UnityEngine;
using System.Collections;

public class DestroyByTime : MonoBehaviour {

public float m_LifeTime = 5.0f;

// Use this for initialization
void Start () {
Destroy (gameObject,m_LifeTime);
}
}

GameBoundary.cs

using UnityEngine;
using System.Collections;

public class GameBoundary : MonoBehaviour {

void OnTriggerExit(Collider other) {
Destroy(other.gameObject);
}
}

Mover.cs

using UnityEngine;
using System.Collections;

public class Mover : MonoBehaviour {

public float m_Speed = 8;

private Rigidbody m_RigidBody;

private GameController m_GameController;

// Use this for initialization
void Start () {
m_RigidBody = GetComponent<Rigidbody> ();
m_RigidBody.velocity = transform.forward * m_Speed;
GameObject gamecontrollerobject = GameObject.FindGameObjectWithTag ("GameController");
if (gamecontrollerobject != null) {
m_GameController = gamecontrollerobject.GetComponent<GameController>();
}
if (m_GameController == null) {
Debug.Log("m_GameController == null in ShipController::Start()");
}
}

void Update(){
if (m_GameController.IsGameEnd ())
{
m_RigidBody.rotation = Quaternion.Euler(0.0f,0.0f,0.0f);
m_RigidBody.velocity = new Vector3(0.0f,0.0f,0.0f);
}
}
}

RandomRotator.cs

using UnityEngine;
using System.Collections;

public class RandomRotator : MonoBehaviour {

public float m_Tumble = 5;

private Rigidbody m_Rigidbody;

// Use this for initialization
void Start () {
m_Rigidbody = GetComponent<Rigidbody> ();
m_Rigidbody.angularVelocity = Random.insideUnitSphere * m_Tumble;
}

// Update is called once per frame
void Update () {

}
}

Captures
Game Play
Space_Shooter_Game_Play
Win Game
Space_Shooter_Game_Win

Survival Shooter
Game Introduction
A third-person game with a fixed camera (2.5D – isometric projection)

Code
CameraFollow.cs

using UnityEngine;
using System.Collections;


public class CameraFollow : MonoBehaviour {
public Transform m_Target;

public float m_Smoothing = 5.0f;

Vector3 m_Offset;

// Use this for initialization
void Start () {
m_Offset = transform.position - m_Target.position;
}

// Update is called once per frame
void Update () {
Vector3 targetCamPos = m_Target.position + m_Offset;
transform.position = Vector3.Lerp (transform.position, targetCamPos, m_Smoothing * Time.deltaTime);
}
}

PlayerShooting.cs

using UnityEngine;

public class PlayerShooting : MonoBehaviour
{
public int damagePerShot = 20;
public float timeBetweenBullets = 0.15f;
public float range = 100f;




float timer;
Ray shootRay;
RaycastHit shootHit;
int shootableMask;
ParticleSystem gunParticles;
LineRenderer gunLine;
AudioSource gunAudio;
Light gunLight;
float effectsDisplayTime = 0.2f;

void Awake ()
{
shootableMask = LayerMask.GetMask ("Shootable");
gunParticles = GetComponent<ParticleSystem> ();
gunLine = GetComponent <LineRenderer> ();
gunAudio = GetComponent<AudioSource> ();
gunLight = GetComponent<Light> ();
}

void Update ()
{
timer += Time.deltaTime;

if(Input.GetButton ("Fire1") && timer >= timeBetweenBullets && Time.timeScale != 0)
{
Shoot ();
}

if(timer >= timeBetweenBullets * effectsDisplayTime)
{
DisableEffects ();
}
}

public void DisableEffects ()
{
gunLine.enabled = false;
gunLight.enabled = false;
}

void Shoot ()
{
timer = 0f;

gunAudio.Play ();

gunLight.enabled = true;

gunParticles.Stop ();
gunParticles.Play ();

gunLine.enabled = true;
gunLine.SetPosition (0, transform.position);

shootRay.origin = transform.position;
shootRay.direction = transform.forward;

if(Physics.Raycast (shootRay, out shootHit, range, shootableMask))
{
EnemyHealth enemyHealth = shootHit.collider.GetComponent <EnemyHealth> ();
Debug.Log("Shooting");
Debug.Log("shootHit.collider.name = " + shootHit.collider.name);
if(enemyHealth != null)
{
Debug.Log("enemyHealth != null");
enemyHealth.TakeDamage (damagePerShot, shootHit.point);
}
gunLine.SetPosition (1, shootHit.point);
}
else
{
Debug.Log("Not shot");
gunLine.SetPosition (1, shootRay.origin + shootRay.direction * range);
}
}
}

PlayerMovement.cs

using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
public float m_Speed = 6.0f;

Vector3 m_Movement;

Animator m_Anim;

Rigidbody m_PlayerRigidbody;

int m_FloorMask;

float m_CamRayLength = 100.0f;

void Awake()
{
m_FloorMask = LayerMask.GetMask ("Floor");

m_Anim = GetComponent<Animator> ();

m_PlayerRigidbody = GetComponent<Rigidbody> ();
}


void FixedUpdate()
{
float h = Input.GetAxisRaw ("Horizontal");
float v = Input.GetAxisRaw ("Vertical");

Move (h, v);
Turning ();
Animating(h,v);
}

void Move(float h, float v)
{
m_Movement.Set (h, 0.0f, v);
m_Movement = m_Movement.normalized * m_Speed * Time.deltaTime;
m_PlayerRigidbody.MovePosition (transform.position + m_Movement);
}


void Turning()
{
Ray camRay = Camera.main.ScreenPointToRay (Input.mousePosition);

RaycastHit floorHit;

if(Physics.Raycast(camRay,out floorHit,m_CamRayLength, m_FloorMask))
{
Vector3 playerToMouse = floorHit.point - transform.position;
playerToMouse.y = 0.0f;


Quaternion newRotation = Quaternion.LookRotation(playerToMouse);
m_PlayerRigidbody.MoveRotation(newRotation);
}
}

void Animating(float h, float v)
{
bool walking = (h != 0.0f || v != 0.0f);
m_Anim.SetBool ("IsWalking", walking);
}
}

PlayerHealth.cs

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class PlayerHealth : MonoBehaviour
{
public int startingHealth = 100;
public int currentHealth;
public Slider healthSlider;
public Image damageImage;
public AudioClip deathClip;
public float flashSpeed = 5f;
public Color flashColour = new Color(1f, 0f, 0f, 0.1f);

Animator anim;
AudioSource playerAudio;
PlayerMovement playerMovement;
//PlayerShooting playerShooting;
bool isDead;
bool damaged;

void Awake ()
{
anim = GetComponent <Animator> ();
playerAudio = GetComponent <AudioSource> ();
playerMovement = GetComponent <PlayerMovement> ();
//playerShooting = GetComponentInChildren <PlayerShooting> ();
currentHealth = startingHealth;
}

void Update ()
{
if(damaged)
{
damageImage.color = flashColour;
}
else
{
damageImage.color = Color.Lerp (damageImage.color, Color.clear, flashSpeed * Time.deltaTime);
}
damaged = false;
}

public void TakeDamage (int amount)
{
damaged = true;

currentHealth -= amount;

healthSlider.value = currentHealth;

playerAudio.Play ();

if(currentHealth <= 0 && !isDead)
{
Death ();
}
}

void Death ()
{
isDead = true;

//playerShooting.DisableEffects ();

anim.SetTrigger ("Die");

playerAudio.clip = deathClip;
playerAudio.Play ();

playerMovement.enabled = false;
//playerShooting.enabled = false;
}

public void RestartLevel ()
{
Application.LoadLevel (Application.loadedLevel);
}
}

EnemyMovement.cs

using UnityEngine;
using System.Collections;

public class EnemyMovement : MonoBehaviour
{
Transform player;
PlayerHealth playerHealth;
EnemyHealth enemyHealth;
NavMeshAgent nav;

void Awake ()
{
player = GameObject.FindGameObjectWithTag ("Player").transform;
playerHealth = player.GetComponent <PlayerHealth> ();
enemyHealth = GetComponent <EnemyHealth> ();
nav = GetComponent <NavMeshAgent> ();
}

void Update ()
{
if(enemyHealth.currentHealth > 0 && playerHealth.currentHealth > 0)
{
nav.SetDestination (player.position);
}
else
{
nav.enabled = false;
}
}
}

EnemyHealth.cs

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
public int startingHealth = 100;
public int currentHealth;
public float sinkSpeed = 2.5f;
public int scoreValue = 10;
public AudioClip deathClip;

Animator anim;
AudioSource enemyAudio;
ParticleSystem hitParticles;
CapsuleCollider capsuleCollider;
bool isDead;
bool isSinking;

void Awake ()
{
anim = GetComponent <Animator> ();
enemyAudio = GetComponent <AudioSource> ();
hitParticles = GetComponentInChildren <ParticleSystem> ();
capsuleCollider = GetComponent <CapsuleCollider> ();

currentHealth = startingHealth;
isDead = false;
}

void Update ()
{
if(isSinking)
{
transform.Translate (-Vector3.up * sinkSpeed * Time.deltaTime);
}
}

public void TakeDamage (int amount, Vector3 hitPoint)
{
Debug.Log ("isDead = " + isDead);
if(isDead)
return;

enemyAudio.Play ();

currentHealth -= amount;

hitParticles.transform.position = hitPoint;
hitParticles.Play();

if(currentHealth <= 0)
{
Death ();
}
}

void Death ()
{
isDead = true;

capsuleCollider.isTrigger = true;

anim.SetTrigger ("Dead");

enemyAudio.clip = deathClip;
enemyAudio.Play ();
}

public void StartSinking ()
{
GetComponent <NavMeshAgent> ().enabled = false;
GetComponent <Rigidbody> ().isKinematic = true;
isSinking = true;
ScoreManager.score += scoreValue;
Destroy (gameObject, 2f);
}
}

EnemyAttack.cs

using UnityEngine;
using System.Collections;

public class EnemyAttack : MonoBehaviour
{
public float timeBetweenAttacks = 0.5f;
public int attackDamage = 10;

public float validAttackDistance = 1.0f;

Animator anim;
GameObject player;
PlayerHealth playerHealth;
EnemyHealth enemyHealth;
bool playerInRange;
float timer;

void Awake ()
{
player = GameObject.FindGameObjectWithTag ("Player");
playerHealth = player.GetComponent <PlayerHealth> ();
enemyHealth = GetComponent<EnemyHealth>();
anim = GetComponent <Animator> ();
}

/*
void OnTriggerEnter (Collider other)
{
if(other.gameObject == player)
{
playerInRange = true;
}
}

void OnTriggerExit (Collider other)
{
if(other.gameObject == player)
{
playerInRange = false;
}
}
*/

void OnCollisionEnter(Collision collision) {
if(collision.gameObject == player)
{
playerInRange = true;
}
}

void OnCollisionExit(Collision collision) {
if(collision.gameObject == player)
{
playerInRange = false;
}
}

void Update ()
{
timer += Time.deltaTime;

if(timer >= timeBetweenAttacks && playerInRange && enemyHealth.currentHealth > 0)
{
Attack ();
}

if(playerHealth.currentHealth <= 0)
{
anim.SetTrigger ("PlayerDead");
}
}

void Attack ()
{
timer = 0f;

if(playerHealth.currentHealth > 0)
{
playerHealth.TakeDamage (attackDamage);
}
}
}

ScoreManager.cs

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class ScoreManager : MonoBehaviour
{
public static int score;

Text text;

void Awake ()
{
text = GetComponent <Text> ();
score = 0;
}

void Update ()
{
text.text = "Score: " + score;
}
}

EnemyManager.cs

using UnityEngine;

public class EnemyManager : MonoBehaviour
{
public PlayerHealth playerHealth;
public GameObject enemy;
public float spawnTime = 3f;
public Transform[] spawnPoints;

void Start ()
{
InvokeRepeating ("Spawn", spawnTime, spawnTime);
}

void Spawn ()
{
if(playerHealth.currentHealth <= 0f)
{
return;
}

int spawnPointIndex = Random.Range (0, spawnPoints.Length);

Instantiate (enemy, spawnPoints[spawnPointIndex].position, spawnPoints[spawnPointIndex].rotation);
}
}

GameOverManager.cs

using UnityEngine;

public class GameOverManager : MonoBehaviour
{
public PlayerHealth playerHealth;

Animator anim;

void Awake()
{
anim = GetComponent<Animator>();
}

void Update()
{
Debug.Log ("playerHealth.currentHealth = " + playerHealth.currentHealth);
if (playerHealth.currentHealth <= 0)
{
anim.SetTrigger("GameOver");
}
}
}

Captures
Game (no screenshot was captured, for various reasons)
Survival_Shooter_Game
Image source

2D-ROGUELIKE-TUTORIAL

Game Introduction
Over the course of the project we will create procedural tile-based levels, implement turn-based movement, and add a hunger system, audio, and mobile touch controls.

Code
BoardManager.cs

using UnityEngine;
using System;
using System.Collections.Generic;
using Random = UnityEngine.Random;

public class BoardManager : MonoBehaviour {
[Serializable]
public class Count
{
public int minimum;
public int maximum;

public Count(int min, int max)
{
minimum = min;
maximum = max;
}
}

public int columns = 8;
public int rows = 8;
public Count wallCount = new Count(5,9);
public Count foodCount = new Count(1,5);
public GameObject exit;
public GameObject[] floorTiles;
public GameObject[] wallTiles;
public GameObject[] foodTiles;
public GameObject[] enemyTiles;
public GameObject[] outerwallTiles;

private Transform boardHolder;
private List<Vector3> gridPositions = new List<Vector3>();

void InitialiseList()
{
gridPositions.Clear ();

for (int x = 1; x < columns - 1; x++)
{
for(int y = 1; y < rows - 1; y++)
{
gridPositions.Add(new Vector3(x,y,0f));
}
}
}

void BoardSetup()
{
boardHolder = new GameObject ("Board").transform;

for(int x = -1; x < columns + 1; x++)
{
for(int y = -1; y < rows + 1; y++)
{
GameObject toInstantiate = floorTiles[Random.Range (0,floorTiles.Length)];
if(x == -1 || x == columns || y == -1 || y == rows)
{
toInstantiate = outerwallTiles[Random.Range(0,outerwallTiles.Length)];
}

GameObject instance = Instantiate (toInstantiate, new Vector3(x,y,0f),Quaternion.identity) as GameObject;

instance.transform.SetParent(boardHolder);
}
}
}

Vector3 RandomPosition()
{
int randomIndex = Random.Range (0, gridPositions.Count);
Vector3 randomPosition = gridPositions [randomIndex];
gridPositions.RemoveAt(randomIndex);
return randomPosition;
}

void LayoutObjectAtRandom(GameObject[] tileArray, int minimum, int maximum)
{
int objectCount = Random.Range (minimum, maximum + 1);
for (int i = 0; i < objectCount; i++) {
Vector3 randomPosition = RandomPosition();
GameObject tileChoice = tileArray[Random.Range (0, tileArray.Length)];
Instantiate(tileChoice, randomPosition, Quaternion.identity);
}
}

public void SetupScene(int level)
{
BoardSetup ();
InitialiseList ();
LayoutObjectAtRandom (wallTiles, wallCount.minimum, wallCount.maximum);
LayoutObjectAtRandom (foodTiles, foodCount.minimum, foodCount.maximum);
int enemyCount = (int)Mathf.Log (level, 2f);
LayoutObjectAtRandom (enemyTiles, enemyCount, enemyCount);
Instantiate (exit, new Vector3 (columns - 1, rows - 1, 0f), Quaternion.identity);
}
}

GameManager.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;

public class GameManager : MonoBehaviour {

public float levelStartDelay = 2f;

public float turnDelay = 0.1f;

public static GameManager instance = null;

public BoardManager boardScript;

public int playerFoodPoints = 30;

[HideInInspector]public bool playerTurn = true;

private Text levelText;
private GameObject levelImage;
private int level = 1;
private bool doingSetup;

private List<Enemy> enemies;
private bool enemiesMoving;

void Awake()
{
if (instance == null)
{
instance = this;
} else if (instance != this)
{
Destroy (gameObject);
}

DontDestroyOnLoad(gameObject);

enemies = new List<Enemy> ();

boardScript = GetComponent<BoardManager> ();
InitGame ();
}

private void OnLevelWasLoaded(int index)
{
level++;

InitGame ();
}

void InitGame()
{
doingSetup = true;
levelImage = GameObject.Find ("LevelImage");
levelText = GameObject.Find ("LevelText").GetComponent<Text> ();
levelText.text = "Day " + level;
levelImage.SetActive (true);
Invoke ("HideLevelImage", levelStartDelay);

enemies.Clear ();
boardScript.SetupScene (level);
}

private void HideLevelImage()
{
levelImage.SetActive (false);
doingSetup = false;
}

public void GameOver()
{
levelText.text = "After " + level + " days, you starved.";
levelImage.SetActive (true);
enabled = false;
}

// Update is called once per frame
void Update () {
if (playerTurn || enemiesMoving || doingSetup ) {
return ;
}

StartCoroutine (MoveEnemies ());
}

public void AddEnemyToList(Enemy script)
{
enemies.Add (script);
}

IEnumerator MoveEnemies()
{
enemiesMoving = true;
yield return new WaitForSeconds(turnDelay);

if (enemies.Count == 0) {
yield return new WaitForSeconds(turnDelay);
}

for (int i = 0; i < enemies.Count; i++) {
enemies[i].MoveEnemy();
yield return new WaitForSeconds(turnDelay);
}

Debug.Log ("GameManager::MoveEnemies");
playerTurn = true;
enemiesMoving = false;
}
}

MovingObject.cs

using UnityEngine;
using System.Collections;

public abstract class MovingObject : MonoBehaviour {

public float moveTime = 0.1f;
public LayerMask blockingLayer;

private BoxCollider2D BoxCollider;
private Rigidbody2D rb2D;
private float inverseMoveTime;

// Use this for initialization
protected virtual void Start () {
BoxCollider = GetComponent<BoxCollider2D> ();
rb2D = GetComponent<Rigidbody2D> ();
inverseMoveTime = 1f / moveTime;
}

protected bool Move(int xDir, int yDir, out RaycastHit2D hit)
{
Vector2 start = transform.position;
Vector2 end = start + new Vector2 (xDir, yDir);

BoxCollider.enabled = false;
hit = Physics2D.Linecast (start, end, blockingLayer);
BoxCollider.enabled = true;

if( hit.transform == null)
{
StartCoroutine(SmoothMovement(end));
return true;
}

return false;
}

protected IEnumerator SmoothMovement(Vector3 end)
{
float sqrRemainingDistance = (transform.position - end).sqrMagnitude;

while (sqrRemainingDistance > float.Epsilon) {
Vector3 newPosition = Vector3.MoveTowards(rb2D.position, end, inverseMoveTime * Time.deltaTime);
rb2D.MovePosition(newPosition);
sqrRemainingDistance = (transform.position - end).sqrMagnitude;
yield return null;
}
}

protected virtual void AttemptMove<T>(int xDir, int yDir) where T : Component
{
RaycastHit2D hit;
bool canMove = Move (xDir, yDir, out hit);
if (hit.transform == null) {
return ;
}

T hitComponent = hit.transform.GetComponent<T> ();

if (!canMove && hitComponent != null) {
OnCanMove(hitComponent);
}
}

protected abstract void OnCanMove<T>(T component) where T : Component;
}

Player.cs

using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class Player : MovingObject {

public int wallDamage = 1;
public int pointsPerFood = 10;
public int pointsPerSoda = 20;
public float restartLevelDelay = 1f;
public Text foodText;

public AudioClip moveSound1;
public AudioClip moveSound2;
public AudioClip eatSound1;
public AudioClip eatSound2;
public AudioClip drinkSound1;
public AudioClip drinkSound2;
public AudioClip gameOverSound;

private Animator animator;

private int food;

private Vector2 touchOrigin = -Vector2.one;

protected override void Start()
{
animator = GetComponent<Animator> ();

// Read the persisted food points before displaying them
food = GameManager.instance.playerFoodPoints;

foodText.text = "Food:" + food;

base.Start();
}

private void OnDisable()
{
GameManager.instance.playerFoodPoints = food;
}

// Update is called once per frame
void Update () {
if (!GameManager.instance.playerTurn) {
return ;
}

Debug.Log ("Player::Update() called");
int horizontal = 0;
int vertical = 0;

#if UNITY_EDITOR || UNITY_STANDALONE || UNITY_WEBPLAYER
horizontal = (int)Input.GetAxisRaw ("Horizontal");
vertical = (int)Input.GetAxisRaw ("Vertical");

if (horizontal != 0) {
vertical = 0;
}

#else

if(Input.touchCount > 0)
{
Touch myTouch = Input.touches[0];
if(myTouch.phase == TouchPhase.Began)
{
touchOrigin = myTouch.position;
}
else if(myTouch.phase == TouchPhase.Ended && touchOrigin.x >= 0)
{
Vector2 touchEnd = myTouch.position;
float x = touchEnd.x - touchOrigin.x;
float y = touchEnd.y - touchOrigin.y;
touchOrigin.x = -1;
if(Mathf.Abs(x) > Mathf.Abs(y))
{
horizontal = x > 0 ? 1 : -1;
}
else
{
vertical = y > 0 ? 1 : -1;
}
}
}
#endif

if (horizontal != 0 || vertical != 0 ) {
AttemptMove<Wall>(horizontal, vertical);
}
}

protected override void AttemptMove<T>(int xDir, int yDir)
{
food--;
foodText.text = "Food:" + food;

base.AttemptMove<T> (xDir, yDir);

RaycastHit2D hit;
if(Move (xDir, yDir, out hit))
{
SoundManager.instance.RandomizeSfx(moveSound1,moveSound2);
}

CheckIfGameOver ();

GameManager.instance.playerTurn = false;
}

private void OnTriggerEnter2D(Collider2D other)
{
if (other.tag == "Exit") {
Invoke ("Restart", restartLevelDelay);
enabled = false;
} else if (other.tag == "Food") {
food += pointsPerFood;
other.gameObject.SetActive(false);
foodText.text = "+:" + pointsPerFood + "Food: " + food;
SoundManager.instance.RandomizeSfx(drinkSound1,drinkSound2);
} else if (other.tag == "Soda") {
food += pointsPerSoda;
other.gameObject.SetActive(false);
foodText.text = "+:" + pointsPerSoda + "Food: " + food;
SoundManager.instance.RandomizeSfx(drinkSound1,drinkSound2);

}
}

protected override void OnCanMove<T>(T component)
{
Wall hitWall = component as Wall;
hitWall.DamageWall (wallDamage);
animator.SetTrigger ("PlayerChop");
}

private void Restart()
{
Application.LoadLevel (Application.loadedLevel);
}

public void LoseFood(int loss)
{
animator.SetTrigger ("PlayerHit");
food -= loss;
foodText.text = "-:" + loss + "Food: " + food;
CheckIfGameOver();
}

private void CheckIfGameOver()
{
if (food <= 0) {
SoundManager.instance.PlaySingle(gameOverSound);
SoundManager.instance.musicSource.Stop();
GameManager.instance.GameOver();
}
}
}

Enemy.cs

using UnityEngine;
using System.Collections;

public class Enemy : MovingObject {

public int playerDamager;

private Animator animator;
private Transform target;
private bool skipMove;

public AudioClip enemyAttack1;
public AudioClip enemyAttack2;

protected override void Start () {
GameManager.instance.AddEnemyToList (this);
animator = GetComponent<Animator> ();
target = GameObject.FindGameObjectWithTag ("Player").transform;
base.Start ();
}

protected override void AttemptMove<T>(int xDir, int yDir)
{
if (skipMove) {
skipMove = false;
return;
}

base.AttemptMove<T> (xDir, yDir);

skipMove = true;
}

public void MoveEnemy()
{
int xDir = 0;
int yDir = 0;

if (Mathf.Abs (target.position.x - transform.position.x) < float.Epsilon) {
yDir = target.position.y > transform.position.y ? 1 : -1;
} else {
xDir = target.position.x > transform.position.x ? 1 : -1;
}

AttemptMove<Player> (xDir, yDir);
}

protected override void OnCanMove<T>(T component)
{
Player hitPlayer = component as Player;

animator.SetTrigger ("enemyAttack");

hitPlayer.LoseFood (playerDamager);

SoundManager.instance.RandomizeSfx (enemyAttack1, enemyAttack2);
}
}

SoundManager.cs

using UnityEngine;
using System.Collections;

public class SoundManager : MonoBehaviour {

public AudioSource efxSource;
public AudioSource musicSource;
public static SoundManager instance = null;

public float lowPitchRange = 0.95f;
public float highPitchRange = 1.05f;

void Awake()
{
if (instance == null) {
instance = this;
} else if (instance != this) {
Destroy(gameObject);
}
DontDestroyOnLoad (gameObject);
}

public void PlaySingle(AudioClip clip)
{
efxSource.clip = clip;
efxSource.Play ();
}

public void RandomizeSfx(params AudioClip[] clips)
{
int randomIndex = Random.Range (0, clips.Length);
float randomPitch = Random.Range (lowPitchRange, highPitchRange);

efxSource.pitch = randomPitch;
efxSource.clip = clips [randomIndex];
efxSource.Play ();
}

// Use this for initialization
void Start () {

}

// Update is called once per frame
void Update () {

}
}

Wall.cs

using UnityEngine;
using System.Collections;

public class Wall : MonoBehaviour {

public Sprite dmgSprite;

public int hp = 4;

private SpriteRenderer spriteRenderer;

public AudioClip wallChop1;
public AudioClip wallChop2;

void Awake()
{
spriteRenderer = GetComponent<SpriteRenderer> ();
}

public void DamageWall(int loss)
{
spriteRenderer.sprite = dmgSprite;
hp -= loss;
SoundManager.instance.RandomizeSfx (wallChop1, wallChop2);
if (hp <= 0) {
gameObject.SetActive(false);
}
}
}

Loader.cs

using UnityEngine;
using System.Collections;

public class Loader : MonoBehaviour {

public GameObject gameManager;

void Awake()
{
if (GameManager.instance == null)
{
Instantiate (gameManager);
}
}

// Update is called once per frame
void Update () {

}
}

Captures
Game Start
2DRogueLike_Game_Start
Game Play
2DRogueLike_Game_Play
Lose Game
2DRogueLike_Game_Lose

Particle System

As before, I will approach the Particle System from the three angles of What, Why, and How, and then, on top of that understanding, look at how to abstract the loading and management of Particle Systems in AssetBundle (AB) resources.

What

Particles are small, simple images or meshes that are displayed and moved in great numbers by a particle system.

Here is a screenshot of what a Particle System is made of:
ParticleSystemExambple
As you can see, a Particle System is composed of many controlling modules. Here I will focus directly on the last one, the Renderer module, because it is the module with the greatest influence on the final rendered result of a particle effect.

The Renderer module looks like this:
ParticleSystemRendererModule
The Renderer module controls the Material the Particle System uses underneath, its render mode, and so on. If we want to use our own Material and Shader to implement a specific particle effect, the Renderer module is exactly where the change goes.

Note:
Most particle effects (e.g. smoke, fire, snow, fog) are displayed as billboards, because for such effects the player can hardly tell a billboard apart from a true 3D simulation, while billboards greatly reduce CPU and GPU cost.
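The Renderer module is also reachable from script as the ParticleSystemRenderer component. A minimal sketch of overriding the module's material with our own (the component and enum names are the real Unity ones; the script name and the idea of assigning a custom material here are illustrative):

```csharp
using UnityEngine;

// Illustrative sketch: swap a custom material into a particle effect's
// Renderer module while keeping the default billboard render mode.
public class ParticleMaterialOverride : MonoBehaviour
{
    public Material customMaterial;   // assumed to use our own shader

    void Start()
    {
        // The Renderer module shown in the Inspector is exposed in code
        // as the ParticleSystemRenderer component.
        var psRenderer = GetComponent<ParticleSystemRenderer>();
        psRenderer.sharedMaterial = customMaterial;
        psRenderer.renderMode = ParticleSystemRenderMode.Billboard;
    }
}
```

Attach this next to a Particle System component and assign the material in the Inspector.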

Why

There are other entities in games, however, that are fluid and intangible in nature and consequently difficult to portray using meshes or sprites. For effects like moving liquids, smoke, clouds, flames and magic spells, a different approach to graphics known as particle systems can be used to capture the inherent fluidity and energy.

How

Creating a Particle System in Unity is simple: either GameObject > Effects > Particle System, or just add a Particle System component to an existing GameObject.
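The same setup can also be done entirely from script. A minimal sketch, assuming a recent Unity version where the modules are exposed as settable structs (the values below are arbitrary):

```csharp
using UnityEngine;

// Minimal sketch: build a particle effect at runtime instead of
// going through GameObject > Effects > Particle System.
public class RuntimeParticles : MonoBehaviour
{
    void Start()
    {
        var go = new GameObject("Smoke");
        var ps = go.AddComponent<ParticleSystem>();

        var main = ps.main;            // the Main module
        main.startLifetime = 2.0f;
        main.startSpeed = 0.5f;

        var emission = ps.emission;    // the Emission module
        emission.rateOverTime = 20.0f;
    }
}
```

The effect starts playing automatically once the component exists; further tuning goes through the other module structs in the same way.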

To be continued.

Note:
A particle system can still have an Animator attached to drive its properties with keyframe animation. See: Using Particle Systems in Unity – Animation bindings.

Using in AB

To be continued.

Animation System

Multiplayer Networking

Here we use the HLAPI (High Level API) to build a simple multiplayer networking game.
We need a NetworkManager to manage the network state.

  1. Create an empty GameObject, rename it NetworkManager, and add a NetworkManager component to it. Also add a NetworkManagerHUD component, which shows a simple UI for controlling the NetworkManager.
    NetworkManagerAndHUD
    NetworkManagerHUD
  2. Create our Player prefab (the object that represents the player)
    Here we simply build a Player prefab out of a Capsule and a Cube GameObject.
    Add a NetworkIdentity to the Player to identify it, and tick Local Player Authority.
    Then create the prefab and save it.
    NetworkIdentity
    The NetworkIdentity identifies objects across the network, between server and clients. The NetworkIdentity is used to synchronize information in the object with the network.
    Player Object They represent the player on the server and so have the ability to run commands (which are secure client-to-server remote procedure calls) from the player’s client. In this server authoritative system, other non-player server side objects do not have the capability to receive commands directly from objects on clients.
    In short, the NetworkIdentity identifies objects and synchronizes their information between Server and Clients. The Player thereby gains the ability to execute commands on the server from the client, so the client can control its Player Object.
  3. Registering The Player Prefab
    Before a Client can control the Player Object, we need to register the Player prefab with the Network Manager, which then takes care of spawning the object on the Server and on Clients.
    Assign the Player prefab to the Network Manager's Player Prefab field.
    RegisterPlayerPrefab
    Note:
    Only the server should create instances of objects which have NetworkIdentity as otherwise they will not be properly connected to the system.
  4. Creating Player Movement (Single Player)
    Before driving player movement through commands executed remotely on the server, let's first write simple control logic for local movement.
    Attach PlayerController.cs to the Player prefab.
using UnityEngine;
using System.Collections;

public class PlayerController : MonoBehaviour {

public float mRotateSpeed = 150.0f;

public float mMoveSpeed = 3.0f;

void Update () {
var x = Input.GetAxis("Horizontal") * Time.deltaTime * mRotateSpeed;
var z = Input.GetAxis("Vertical") * Time.deltaTime * mMoveSpeed;

transform.Rotate(0, x, 0);
transform.Translate(0, 0, z);
}
}
  5. Testing Player Movement Online
    Now that the Player prefab can be controlled in single-player, let's see how to control it online.
    First make this machine the Host through the NetworkManagerHUD by clicking LAN Host(H).
    StartHost
    Once the host starts, the NetworkManager automatically spawns an instance of the configured Player prefab.
    To test this we need two running copies of the game, one as the Host and one as the Client.
    First build and publish a standalone PC version.
    Run the PC build as the Host (click LAN Host(H)).
    Then run the Editor version as the Client and connect (click LAN Client).
    TestOnlineMove
    Two objects spawned from the Player prefab appear on the Server side, but input on the Server moves both of them, while input on the Client does not affect the Server side at all.
    This is because the PlayerController script is not yet network-aware (it does not go through NetworkBehaviour to obtain network information and distinguish Server from Client, and so on).
    So we still need to synchronize data between the Client and the Host through the NetworkManager.
  6. Networking Player Movement
    To make the PlayerController script network-aware, it must inherit from NetworkBehaviour (every object that needs networking features, such as receiving various callbacks and automatic server-to-client state synchronization, should inherit from NetworkBehaviour).
    The LocalPlayer is the player GameObject “owned” by the local Client. This ownership is set by the NetworkManager when a Client connects to the Server and new player GameObjects are created from the player prefab. When a Client connects to the Server, the instance created on the local client is marked as the LocalPlayer.
    So when a Client connects to the Server, the NetworkManager marks the newly spawned Player GameObject as that Client's LocalPlayer; we can then use isLocalPlayer to check whether an object is locally controlled, which fixes the problem of moving every Player object at once.
    Player data is still not synchronized between Client and Server, though.
    We also need to attach a NetworkTransform component to the Player prefab; NetworkTransform takes care of synchronizing the GameObject's transform.
  7. Testing Multiplayer Movement
    Build the PC version again, run it as the Server, and use the Editor version as the Client to test.
    I hit the error "Spawn scene object not found for 1
    UnityEngine.Networking.NetworkIdentity:UNetStaticUpdate()"
    Following the discussion here, simply recreating the prefab and reattaching the components fixed it, which feels like a Unity bug.
    MultiplayerNetworkingMovement
    Some NetworkTransform settings control how the network data is synchronized.
  8. Identifying The Local Player
    To visually tell the Client's and Server's Player objects apart, we use a NetworkBehaviour callback method to change the Local Player's color.
    OnStartLocalPlayer – “Called when the local player object has been set up.”
    public override void OnStartLocalPlayer()
    {
    base.OnStartLocalPlayer();
    GetComponent<MeshRenderer>().material.color = Color.blue;
    }
    ChangeLocalPlayerColor
    As you can see, OnStartLocalPlayer runs only for the Local Player, so only the Local Player sees its own object change color; the other client never receives the material change, so it still sees the default white spawned object. (NetworkBehaviour has many similar callback methods that are invoked when the Server and Clients create GameObjects.)
  9. Shooting (Single Player)
    To add more network interaction, let's first get shooting working in single-player.
    Create and tune a Sphere as the Bullet prefab. Add a Cylinder to the Player as a gun, and set a BulletSpawn transform as the muzzle.
    The finished setup looks like this:
    PlayerWithFunAndBullet
    Add a shooting feature to PlayerController.cs.
    void Update()
    {
    ......

    if(Input.GetKeyDown(KeyCode.Space))
    {
    Fire();
    }
    }

    void Fire()
    {
    //Create the bullet from the prefab
    GameObject bullet = (GameObject)Instantiate(mBulletPrefab, mBulletSpawn.position, mBulletSpawn.rotation);

    //Add velocity to the bullet
    bullet.GetComponent<Rigidbody>().velocity = bullet.transform.forward * mBulletSpeed;

    //Destroy the bullet after 2 seconds
    Destroy(bullet, 2.0f);
    }
    Then build and test online:
    MultiplayerNetworkingWithBullet
    As shown above, the bullet does not appear on the Server side. This is because the Bullet has no NetworkIdentity attached to identify the object on Server and Clients, so it cannot synchronize information between client and server, and it has no NetworkTransform to synchronize its position either.
  10. Adding Multiplayer Shooting
    From the analysis above, to give the Bullet networked synchronization we need to do the following:
    1. Add a NetworkIdentity to the Bullet (to identify the object on Server and Clients and let it run commands on the server)
    2. Add a NetworkTransform to the Bullet (to synchronize its position)
    3. Set the Network Send Rate to 0 (the bullet never changes direction or speed; it is physics-driven, so every client can compute its position without any synchronization traffic)
    4. Register the Bullet prefab as a spawnable object in the Network Manager
      SpawnableBullet
    5. Modify PlayerController to spawn the Bullet through a Command
using UnityEngine;
using System.Collections;
using UnityEngine.Networking;

public class PlayerController : NetworkBehaviour {

    public float mRotateSpeed = 150.0f;

    public float mMoveSpeed = 3.0f;

    public GameObject mBulletPrefab;

    public Transform mBulletSpawn;

    public float mBulletSpeed = 6.0f;

    void Update () {
        if(!isLocalPlayer)
        {
            return;
        }

        var x = Input.GetAxis("Horizontal") * Time.deltaTime * mRotateSpeed;
        var z = Input.GetAxis("Vertical") * Time.deltaTime * mMoveSpeed;

        transform.Rotate(0, x, 0);
        transform.Translate(0, 0, z);

        if(Input.GetKeyDown(KeyCode.Space))
        {
            CmdFire();
        }
    }

    [Command]
    void CmdFire()
    {
        //Create the bullet from the prefab
        GameObject bullet = (GameObject)Instantiate(mBulletPrefab, mBulletSpawn.position, mBulletSpawn.rotation);

        //Add velocity to the bullet
        bullet.GetComponent<Rigidbody>().velocity = bullet.transform.forward * mBulletSpeed;

        //Spawn the bullet on the clients
        NetworkServer.Spawn(bullet);

        //Destroy the bullet after 2 seconds
        Destroy(bullet, 2.0f);
    }

    public override void OnStartLocalPlayer()
    {
        base.OnStartLocalPlayer();
        GetComponent<MeshRenderer>().material.color = Color.blue;
    }
}
![MultiplayerNetworkingWithBulletSyn](/img/Unity/MultiplayerNetworkingWithBulletSyn.PNG)

为了使Bullet真正在多个Client之间同步,下面需要理解几个概念:
1. Remote Actions
先来看看Remote Actions的调用框架:
RemoteActions
Remote Actions分为:
Commands - which are called from the client and run on the server(client调用,server执行)
ClientRpc calls - which are called on the server and run on clients.(server调用,client执行)
这里我们看一下Commands,前面我们提到过通过isLocalPlayer区分Local Player的控制等操作。
除了通过isLocalPlayer,我们还可以通过Command attribute来实现。(NetworkManager在Server端创建的包含NetworkIdentity的Player Object可以通过commands的形式从client端调用server端方法)
The [Command] attribute indicates that the following function will be called by the Client, but will be run on the Server.
When making a networked command, the function name must begin with “Cmd”.
Command方法必须以Cmd开头。
Commands are sent from player objects on the client to player objects on the server. Commands can only be sent from YOUR player object, so you cannot control the objects of other players.
Commands只能从Local Player Object发送到Server端Player对应的Object,所以不能控制其他Player Object,这也就是为什么不用区分isLocalPlayer也能确保在Client端只影响Local Player Object的原因(但在Server端需要区分,不然Host按下Space会导致所有的Client都发射子弹)
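[Command]的调用语义(client发起,server执行)可以用下面一段与Unity无关的Python小模拟来理解。注意这只是示意代码,其中的Server、cmd_fire等命名都是为了说明而假设的,并非UNet的API:

```python
# 极简模拟[Command]语义: 调用发生在client, 实际逻辑在server上执行
# (纯示意代码, Server/run_command/cmd_fire等命名均为假设, 并非UNet API)
class Server:
    def __init__(self):
        self.spawned = []

    def run_command(self, player_id, command, *args):
        # Command只接受来自player自己对象的请求, 并在server端执行
        return command(self, player_id, *args)

def cmd_fire(server, player_id):
    # 相当于在server端Instantiate + NetworkServer.Spawn
    bullet = "bullet_of_player_%d" % player_id
    server.spawned.append(bullet)
    return bullet

server = Server()
bullet = server.run_command(1, cmd_fire)
print(bullet)            # bullet_of_player_1
print(server.spawned)    # ['bullet_of_player_1']
```

子弹的创建逻辑始终在server端执行,client只负责发起请求,这正是Server Authority的思路。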
2. Object Spawning
In the Multiplayer Networking HLAPI “Spawn” means more than just “Instantiate”. It means to create a GameObject on the Server and on all of the Clients connected to the Server. The GameObject will then be managed by the spawning system; state updates are sent to Clients when the object changes on the Server.
可以看出Multiplayer Networking的Object Spawn是针对Server和所有Clients而言的,需要在Server上创建该GameObject然后同步到所有连接的Clients上,Server管理了所有的需要同步的状态信息。
比如有2个Player,那么创建结构如下:
PlayerSpawning
Object和Player Objects Creation的完整流程参考官网
11. Player Health(Single Player)
增加Bullet的碰撞逻辑,编写Bullet Script。(这里遇到了Bullet脚本无效,重新制作Bullet prefab后又可以了)
Bullet.cs

using UnityEngine;
using System.Collections;

public class Bullet : MonoBehaviour {

    public int mDamage = 10;

    public void OnCollisionEnter(Collision collision)
    {
        GameObject hit = collision.gameObject;
        Health health = hit.GetComponent<Health>();
        if(health != null)
        {
            health.TakeDamage(mDamage);
        }
        Destroy(gameObject);
    }
}
接下来Health逻辑代码。
Health.cs
using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class Health : MonoBehaviour {

    public const int mMaxHealth = 100;

    public int mCurrentHealth = mMaxHealth;

    public RectTransform mHealthBar;

    public void TakeDamage(int amount)
    {
        mCurrentHealth -= amount;
        if(mCurrentHealth <= 0)
        {
            mCurrentHealth = 0;
            Debug.Log("Dead!");
        }

        mHealthBar.sizeDelta = new Vector2(mCurrentHealth, mHealthBar.sizeDelta.y);
    }
}
血条可视化HealthBar制作。
用3D UI的Image来制作血条。
![PlayerWithHealthBarHierachycal](/img/Unity/PlayerWithHealthBarHierachycal.PNG)
动态修改HealthBar的Forground的Rect来显示当前血量。
![PlayerGameObjectWithHealthBar](/img/Unity/PlayerGameObjectWithHealthBar.PNG)
![HealthScriptInEditor](/img/Unity/HealthScriptInEditor.PNG)
添加Billboard脚本(挂载到HealthBarCanva上),确保血条始终面向Camera。
Billboard.cs
using UnityEngine;
using System.Collections;

public class Billboard : MonoBehaviour {

    void Update () {
        transform.LookAt(Camera.main.transform);
    }
}
测试效果:
![MultiplayerNetworkingWithHealthBar](/img/Unity/MultiplayerNetworkingWithHealthBar.PNG)
可以看到血条更新了,但是血条在Server和Client端并不一致,这是因为Bullet和Health脚本是工作在Local的,并没有通过网络同步数据。
  12. Networking Player Health
    Changes to the player’s current health should only be applied on the Server. These changes are then synchronized on the Clients. This is called Server Authority.
    改变Player血量应该是在Server端修改,然后再同步到Client端。(这叫做Server Authority)
    先了解一个概念State Synchronization
    State Synchronization is done from the Server to Remote Clients.
    还记得之前讲到的Remote Action里的ClientRpc calls吗?
    Which are called on the server and run on clients.(server调用,client执行)
    ClientRpc calls用于同步Server端控制的数据。(Commands用于同步Client端控制的数据,比如前面我们用Commands同步子弹发射。)
    [SyncVars]标记的成员变量就是用于把server端控制的数据同步到client端
    SyncLists are like SyncVars but they are lists of values instead of individual values. SyncLists do not require the SyncVar attributes.(SyncLists相当于SyncVar的列表形式,不需要[SyncVar]关键词)
    我们可以通过重写NetworkBehaviour的OnSerialize和OnDeSerialize函数去自定义序列化行为。
    Serialization Flow on server and client详情参考
    这里需要用到SyncVars来标记我们的mCurrentHealth值并且通过isServer来使TakeDamage只在Server上起作用(因为是作为Server端控制的数据):
......

public class Health : NetworkBehaviour {

    public const int mMaxHealth = 100;

    [SyncVar]
    public int mCurrentHealth = mMaxHealth;

    public RectTransform mHealthBar;

    public void TakeDamage(int amount)
    {
        if(!isServer)
        {
            return;
        }

        ......
    }
}
![SyncHealthBarOnlyOnServer](/img/Unity/SyncHealthBarOnlyOnServer.PNG)
但从上面可以看出只有Server端的血条更新,虽然Client端的值显示变化了但是血条UI却没有变化,这是因为我们只同步了mCurrentHealth数据,而没有同步HealthBar Foreground的Rect。
这里需要介绍SyncVar hook. [SyncVar hooks will link a function to the SyncVar. These functions are invoked on the Server and all Clients when the value of the SyncVar changes.](https://unity3d.com/cn/learn/tutorials/topics/multiplayer-networking/networking-player-health?playlist=29690)当SyncVar变化的时候,SyncVar hook关联的方法会在Server和所有Clients里调用。(用于更新Server和Client的一些相关数据,这里我们是为了更新HealthBar Foreground的Rect)
......

public class Health : NetworkBehaviour {

    public const int mMaxHealth = 100;

    [SyncVar(hook = "OnChangeHealth")]
    public int mCurrentHealth = mMaxHealth;

    public RectTransform mHealthBar;

    public void TakeDamage(int amount)
    {
        ......
    }

    void OnChangeHealth(int currenthealth)
    {
        mHealthBar.sizeDelta = new Vector2(currenthealth, mHealthBar.sizeDelta.y);
    }
}
再次测试效果:
![SyncHealthBarAndValue](/img/Unity/SyncHealthBarAndValue.PNG)
终于成功地同步了mCurrentHealth数据和HealthBar Foreground的Rect数据。
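SyncVar加hook的同步流程(server改值,同步到各client,值变化时触发hook更新UI)可以用下面这段与引擎无关的Python小模拟来理解。这只是概念示意,函数名均为假设,并非UNet API:

```python
# 极简模拟SyncVar + hook: 只有server能改值, 改动同步到所有client并触发hook
# (纯示意代码, 并非UNet API)
def take_damage(server_state, client_states, amount, hook):
    # Server Authority: 血量只在server端扣减
    server_state["health"] -= amount
    # SyncVar hook在Server端也会被调用
    hook(server_state["health"])
    # 状态同步: server -> 所有client, 每个client端触发hook(如更新血条Rect)
    for client in client_states:
        client["health"] = server_state["health"]
        hook(client["health"])

bar_widths = []
server = {"health": 100}
clients = [{"health": 100}, {"health": 100}]
take_damage(server, clients, 10, bar_widths.append)
print(server, clients, bar_widths)
```

可以看到client端从不主动改血量,它们只是接收server推送的新值并在hook里刷新各自的UI。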
  13. Death And Respawning
    ClientRpc在这里正式出场(用于同步Server端控制的数据)。
    ClientRpc calls can be sent from any spawned object on the Server with a NetworkIdentity. Even though this function is called on the Server, it will be executed on the Clients.
    ClientRpc修饰的方法在Server端调用,但会在Client端执行。([ClientRpc]修饰的方法需要加Rpc前缀)
    这里我们为了使Player在血量为零后Respawn,我们需要添加[ClientRpc]修饰的Respawn方法到Health脚本中。
......

public class Health : NetworkBehaviour {

    public const int mMaxHealth = 100;

    [SyncVar(hook = "OnChangeHealth")]
    public int mCurrentHealth = mMaxHealth;

    public RectTransform mHealthBar;

    public void TakeDamage(int amount)
    {
        if(!isServer)
        {
            return;
        }

        mCurrentHealth -= amount;
        if(mCurrentHealth <= 0)
        {
            mCurrentHealth = mMaxHealth;

            RpcRespawn();

            Debug.Log("Dead!");
        }
    }

    void OnChangeHealth(int currenthealth)
    {
        mHealthBar.sizeDelta = new Vector2(currenthealth, mHealthBar.sizeDelta.y);
    }

    [ClientRpc]
    void RpcRespawn()
    {
        //RpcRespawn会在所有Client上对该Player Object执行,
        //用isLocalPlayer保证只有拥有该对象的Client移动自己,位置再经由NetworkTransform同步出去
        if(isLocalPlayer)
        {
            //move back to zero location
            transform.position = Vector3.zero;
        }
    }
}
测试效果:
![RespawnPlayer](/img/Unity/RespawnPlayer.PNG)
可以看到我们成功把生命值归零的Client重置到了原点处。
  14. Handling Non-Player Objects
    Enemy属于non-player,属于Server端控制的对象。
    所以在设置NetworkIdentity的时候,勾选Server Only(By setting Server Only to true, this prevents the Enemy Spawner from being sent to the Clients.)
    ServerOnly
    添加Enemy Spawn的功能。
    EnemySpawner.cs
using UnityEngine;
using System.Collections;
using UnityEngine.Networking;

public class EnemySpawner : NetworkBehaviour {

    public GameObject mEnemyPrefab;

    public int mNumberOfEnemies;

    public override void OnStartServer()
    {
        base.OnStartServer();

        for(int i = 0; i < mNumberOfEnemies; i++)
        {
            Vector3 spawnposition = new Vector3(
                Random.Range(-8.0f, 8.0f),
                0.0f,
                Random.Range(-8.0f, 8.0f));

            Quaternion spawnrotation = Quaternion.Euler(
                0.0f,
                Random.Range(0, 180),
                0.0f);

            GameObject enemy = (GameObject)Instantiate(mEnemyPrefab, spawnposition, spawnrotation);
            NetworkServer.Spawn(enemy);
        }
    }
}
[OnStartServer is called on the Server when the Server starts listening to the Network](https://unity3d.com/cn/learn/tutorials/topics/multiplayer-networking/handling-non-player-objects?playlist=29690)
OnStartServer在Server端启动的时候调用,这里用来初始化敌人。
在Player Prefab基础上制作EnemyPrefab。
![EnemyPrefab](/img/Unity/EnemyPrefab.PNG)
添加EnemyPrefab到NetworkManager的Spawnable List里。
![EnemySpawnableList](/img/Unity/EnemySpawnableList.PNG)
设置EnemySpawner。
![EnemySpawner](/img/Unity/EnemySpawner.PNG)
测试效果:
![MultiplayerNetworkingWithEnemy](/img/Unity/MultiplayerNetworkingWithEnemy.PNG)
成功创建了Server管理的Enemy。
  15. Destroying Enemies
    因为Enemy Prefab采用的是Player的Health设定,所以生命值归零时会重置到原点,这并非我们想要的,我们需要Enemy被打死后被摧毁掉。
    这里需要修改Health脚本去做判断。
......

public class Health : NetworkBehaviour {

    ......

    public void TakeDamage(int amount)
    {
        if(!isServer)
        {
            return;
        }

        mCurrentHealth -= amount;
        if(mCurrentHealth <= 0)
        {
            if (mDestroyOnDeath)
            {
                Destroy(gameObject);
            }
            else
            {
                mCurrentHealth = mMaxHealth;

                RpcRespawn();
            }
            Debug.Log("Dead!");
        }
    }

    ......
}
在Enemy Prefab上勾选mDestroyOnDeath。
再次测试:
![DestroyEnemyWhenEnemyDead](/img/Unity/DestroyEnemyWhenEnemyDead.PNG)
成功摧毁Enemy。
  16. Spawning And Respawning
    随机Spawn到不同的位置。
    The NetworkStartPosition component可以用于spawn object到不同的位置。
    在场景里创建Gameobject添加NetworkStartPosition并设定位置。
    NetworkManager会自动去找到这些带有NetworkStartPosition component的Gameobject,把他们的位置作为Start Position的选项。(Round Robin Player Spawn Method on the Network Manager)
    Spawn和Respawn等内容详情参考Spawning and Respawning
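Round Robin的Start Position选取逻辑本身很简单,就是依次循环使用各个出生点,可以用下面的Python小示例示意(纯示意,并非Unity的实现):

```python
from itertools import cycle

def make_round_robin_spawner(start_positions):
    # 依次循环使用各个NetworkStartPosition的位置(Round Robin)
    positions = cycle(start_positions)
    return lambda: next(positions)

spawn = make_round_robin_spawner([(0, 0), (5, 0), (0, 5)])
print([spawn() for _ in range(4)])   # [(0, 0), (5, 0), (0, 5), (0, 0)]
```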
    Multiplayer Networking教程学习终于结束了。但初次接触Unity Networking,还有很多理解错误的地方,欢迎纠正。

编辑器界面

  1. Hierarchy
    游戏组成元素(这个tutorial里,好比小球,方块,地面,摄像机,灯光,墙等)
    主要用于对物件的归类管理

    1. Create Empty可以用于层次管理归类
  2. Project
    游戏原件管理(这个tutorial里,好比材质,C#脚本)
    主要用于对一次性资源的归类管理

    1. 可通过创建Folder进行资源整理归类
  3. Scene
    编辑场景

    1. 右上角可以切换视角
    2. 可以切换Local和Global模式进行移动物体
  4. Game
    运行时场景(可动态编辑查看物体属性)

  5. Inspector
    对象属性查看

    1. 通过物体名字左边的Active勾选框可以决定物体是否在编辑器可见可选
    2. 可以通过点击界面的?来打开相应面板的介绍页面

工具

MonoDevelop

C#,Js脚本编辑器(也可自己设定编辑器为VS Edit->Preference->External Tools->External Script Editor)

ILSpy

ILSpy is the open-source .NET assembly browser and decompiler.
可以用于反编译一些Unity项目学习源代码

ildasm(安装VS自带的工具)

IL反编译工具
可以通过这个工具查看编译生成的IL中间代码

Blender

Blender is a professional free and open-source 3D computer graphics software product used for creating animated films, visual effects, art, 3D printed models, interactive 3D applications and video games.
主要用于制作一些3D模型和动画,然后导出FBX用于Unity

VSTU

工具原名UnityVS,由于微软收购了SyntaxTree(制作UnityVS的公司),微软将UnityVS置入了VS开发套件中。
VSTU(Visual Studio Tools For Unity)
使Visual Studio支持Unity开发。
好处:

  1. 构建多平台游戏
  2. 在Visual Studio中调试
  3. 在Visual Studio创建Unity脚本
  4. 使用Visual Studio提高工作效率(e.g. 智能提示,高亮,快速查询Unity API,快速查询或插入Unity方法等)
  5. 免费获取开发Unity所需的全部内容
    既然能让我们继续使用熟悉的VS作为开发IDE,那么让我们看看怎么集成安装吧。
    Unity version 4.0.0 or higher; Unity version 5.2.0 or higher to take advantage of built-in support for Visual Studio Tools for Unity version 2.1 or higher.
    注意Unity 5.2.0及以上版本已经内置支持VSTU了。
    这里讲讲老版本需要如何安装集成VSTU:
  1. 下载对应版本的VSTU
  2. 导入VSTU到Unity(Assets -> Import Package -> Visual Studio 20** Tools)
  3. 设置Unity debug开发环境(File -> Building Setting 下勾选Development Build和Script Debugging)
  4. 设定VS作为默认IDE(Edit -> Preference -> External Tools -> External Script Editor设置成对应版本的VS)
  5. 支持调试managed dll

这里使用VSTU有几个快捷和帮助快速开发的小技巧:

  1. Ctrl+Shift+M(显示可定义的Monobehavior方法,并帮助自动生成方法定义)
  2. Ctrl+Shift+Q(熟悉了Unity API后,这个可以快速检索方法并生成方法定义)
  3. Alt+Shift+E(查看Unity项目目录结构文件)
  4. F5快速调试Unity Code(首先需要设定Unity debug环境(前面提到过),然后需要通过Debug -> Attach Unity Debugger(Attach到Unity进程上,最后运行Unity即可))
  5. Unity的错误,警告等信息显示在VS的error list里

相关概念学习

Unity Engine

参考文章

C Sharp

Unity支持C#的作为编程语言,这里不得不了解下C#的历史。
C#是微软推出的一种基于.NET框架的、面向对象的高级编程语言。C#的发音为“C sharp”,模仿音乐上的音名“C♯”(C调升),有C语言升级版的意思。其正确写法应和音名一样为“C♯”,但大多数情况下“♯”符号被井号“#”所混用;两者差别是:“♯”的笔画是上下偏斜的,而“#”的笔画是左右偏斜。C♯由C语言和C++派生而来,继承了其强大的性能,同时又以.NET框架类库作为基础,拥有类似Visual Basic的快速开发能力。C#由安德斯·海尔斯伯格主持开发,微软在2000年发布了这种语言。
相关C#学习

Mono

C#虽好,但是只能在Windows上运行,微软那时候也没有将其开源,所以总是会有人说不能跨平台,光就这点,C#和Java就不能比呀。微软公司已经向ECMA申请将C#作为一种标准。在2001年12月,ECMA发布了ECMA-334 C#语言规范。C#在2003年成为一个ISO标准(ISO/IEC 23270)。这意味着只要你遵守CLI(Common Language Infrastructure),第三方可以将任何一种语言实现到.Net平台之上。Mono就是在这种环境下诞生的。Mono是一个由Xamarin公司(先前是Novell,最早为Ximian)所主持的自由开放源代码项目。该项目的目标是创建一系列符合ECMA标准(Ecma-334和Ecma-335)的.NET工具,包括C#编译器和通用语言架构。与微软的.NET Framework(共通语言运行平台)不同,Mono项目不仅可以运行于Windows系统上,还可以运行于Linux,FreeBSD,Unix,OS X和Solaris,甚至一些游戏平台,例如:Playstation 3,Wii或XBox 360之上。Mono使得C#这门语言有了很好的跨平台能力。相对于微软的.Net Framework运行时库Mono使用自己的Mono VM作为运行时库。

——————–2018/04/22————————————-
突然找到一篇讲述.Net,Mono以及Unity之间关系的好文,里面详细梳理了.Net,C#,Mono和Unity之前的关系和概念,结合.Net Framework相关概念可以加深理解,这里给出这篇好文的链接:扒一扒.net、.net framework、mono和Unity
——————–2018/04/22————————————-

IL2CPP

既然Mono这么好,那么为什么还需要IL2CPP了?

Why do we need IL2CPP?

  1. C# runtime performance still lags behind C/C++(C#运行效率没有C/C++好)
  2. Latest and greatest .NET language and runtime features are not supported in Unity’s current version of Mono.(新版本的.Net语言和运行时特性没有被当前的Unity版本支持)
  3. With around 23 platforms and architecture permutations, a large amount of effort is required for porting, maintaining, and offering feature and quality parity.(Mono VM在跨平台的维护上很费时费力)
  4. Garbage collection can cause pauses while running(Mono VM现有的GC很容易使得游戏卡顿)

IL2CPP Components

  1. AOT(Ahead of Time) compiler
    Ahead-of-time (AOT) compilation is the act of compiling a high-level programming language such as C or C++, or an intermediate language such as Java bytecode or .NET Common Intermediate Language (CIL) code, into a native (system-dependent) machine code with the intention of executing the resulting binary file natively.(预编译IL到系统相关的机器代码。 C# -> IL -> C++ -> Machine code)
  2. IL2CPP Virtual Machine
    Provide additional services (like a GC, metadata, platform specific resources)(提供运行时的一些功能比如GC,访问平台相关资源等)

Mono and IL2CPP compile and execution process

我们来看看使用Mono和使用IL2CPP时的脚本编译运行过程:
下面两张图来源
Mono:
Mono

IL2CPP:
IL2CPP
从上图可以看出编译成IL后,会被IL2CPP再次编译成C++,然后通过Native的C++编译器编译C++代码到相关平台的汇编代码,最后运行在IL2CPP VM里。
这样做的好处官网提到了下几点:

  1. Performance(效率上的优化),原因如下
    . C++ compilers and linkers provide a vast array of advanced optimisations previously unavailable.
    . Static analysis is performed on your code for optimisation of both size and speed.
    . Unity-focused optimisations to the scripting runtime.

  2. All code generation is done to C++ rather than architecture specific machine code. The cost of porting and maintenance of architecture specific code generation is now more amortised. (因为现在是通过利用现有的C++编译器编译C++而没有直接编译成特定架构的机器代码,这样一来跨平台的移植和维护责任就更分散了)

  3. Feature development and bug fixing proceed much faster. For us, days of mucking in architecture specific files are replaced by minutes of changing C++. Features and bug fixes are immediately available for all platforms. (功能开发和bug修改更容易快捷。通过修改IL2CPP的C++生成就能快速的针对多个平台有效)

  4. IL2CPP is not tied to any one specific garbage collector, instead interacting with a pluggable API(GC导致游戏卡顿的现象也可以通过不同的GC方式来改善)

Note:
IL2CPP支持了arm64机器架构(Mono不支持, Apple新出的设备大多基于arm64)

AOT & JIT

之前有讲到过AOT和JIT,那么这里为什么还要再次提他了?
主要原因是在IOS平台开发中,Mono .NET只支持AOT(Ahead of Time预编译生成所有对应机器代码),而不是通过JIT在运行时去编译成对应的机器代码。在IOS上采用full-AOT模式。
在这一限制下,我们必须明白什么是AOT什么是JIT,并且要知道哪些特性会使用到JIT导致IOS不支持。

AOT

AOT详细介绍
AOT大概意思就是预编译所有的IL到系统相关的机器代码。 C# -> IL -> C++ -> Machine code

JIT

JIT:
C# Study中有讲到,JIT是在运行时把Assembly Code(也就是C#编译出的CIL中间代码)编译成本地机器代码。
从Full-AOT模式可以看出,Mono .NET在IOS上不支持动态生成代码。
那么具体哪些特性不被IOS支持,哪些用法会触发动态生成代码呢?

IOS AOT Limitations

IOS AOT limitations

  1. Profiler
  2. Reflection.Emit
  3. Reflection.Emit.Save functionality
  4. COM bindings
  5. The JIT engine
  6. Metadata verifier (since there is no JIT)

在这里我从官网提取几个典型问题来说明:

  1. P/Invokes in Generic Types
    P/Invokes in generic classes aren’t supported:
class GenericType<T> {
    [DllImport ("System")]
    public static extern int getpid ();
}
不支持对泛型类的P/Invoke
  2. Value types as Dictionary Keys
    值类型作为Dictionary的Key时会有问题,实际上实现了IEquatable的类型都会有此问题,因为Dictionary的默认构造函数会使用EqualityComparer.Default作为比较器,而对于实现了IEquatable的类型,EqualityComparer.Default要通过反射来实例化一个实现了IEqualityComparer的类(可以参考EqualityComparer的实现)。 解决方案是自己实现一个IEqualityComparer,然后使用Dictionary<TKey, TValue>(IEqualityComparer)构造器创建Dictionary实例。
    讲到如果T实现了IEquatable,那么就会反射实例化一个实现了IEqualityComparer的类,这里通过查看EqualityComparer源码可以看到确实如此。
public abstract class EqualityComparer<T> : IEqualityComparer, IEqualityComparer<T>
{
    static EqualityComparer<T> defaultComparer;

    public static EqualityComparer<T> Default {
        [System.Security.SecuritySafeCritical] // auto-generated
#if !FEATURE_CORECLR
        [TargetedPatchingOptOut("Performance critical to inline across NGen image boundaries")]
#endif
        get {
            Contract.Ensures(Contract.Result<EqualityComparer<T>>() != null);

            EqualityComparer<T> comparer = defaultComparer;
            if (comparer == null) {
                comparer = CreateComparer();
                defaultComparer = comparer;
            }
            return comparer;
        }
    }

    [System.Security.SecuritySafeCritical] // auto-generated
    private static EqualityComparer<T> CreateComparer() {
        Contract.Ensures(Contract.Result<EqualityComparer<T>>() != null);

        RuntimeType t = (RuntimeType)typeof(T);
        // Specialize type byte for performance reasons
        if (t == typeof(byte)) {
            return (EqualityComparer<T>)(object)(new ByteEqualityComparer());
        }
        // If T implements IEquatable<T> return a GenericEqualityComparer<T>
        if (typeof(IEquatable<T>).IsAssignableFrom(t)) {
            //return (EqualityComparer<T>)Activator.CreateInstance(typeof(GenericEqualityComparer<>).MakeGenericType(t));
            return (EqualityComparer<T>)RuntimeTypeHandle.CreateInstanceForAnotherGenericParameter((RuntimeType)typeof(GenericEqualityComparer<int>), t);
        }
        // If T is a Nullable<U> where U implements IEquatable<U> return a NullableEqualityComparer<U>
        if (t.IsGenericType && t.GetGenericTypeDefinition() == typeof(Nullable<>)) {
            RuntimeType u = (RuntimeType)t.GetGenericArguments()[0];
            if (typeof(IEquatable<>).MakeGenericType(u).IsAssignableFrom(u)) {
                //return (EqualityComparer<T>)Activator.CreateInstance(typeof(NullableEqualityComparer<>).MakeGenericType(u));
                return (EqualityComparer<T>)RuntimeTypeHandle.CreateInstanceForAnotherGenericParameter((RuntimeType)typeof(NullableEqualityComparer<int>), u);
            }
        }
        // If T is an int-based Enum, return an EnumEqualityComparer<T>
        // If you update this check, you need to update the METHOD__JIT_HELPERS__UNSAFE_ENUM_CAST case in getILIntrinsicImplementation
        if (t.IsEnum && Enum.GetUnderlyingType(t) == typeof(int))
        {
            return (EqualityComparer<T>)RuntimeTypeHandle.CreateInstanceForAnotherGenericParameter((RuntimeType)typeof(EnumEqualityComparer<int>), t);
        }
        // Otherwise return an ObjectEqualityComparer<T>
        return new ObjectEqualityComparer<T>();
    }

    ......
}
注意下面这个分支
// If T implements IEquatable<T> return a GenericEqualityComparer<T>
if (typeof(IEquatable<T>).IsAssignableFrom(t)) {
    //return (EqualityComparer<T>)Activator.CreateInstance(typeof(GenericEqualityComparer<>).MakeGenericType(t));
    return (EqualityComparer<T>)RuntimeTypeHandle.CreateInstanceForAnotherGenericParameter((RuntimeType)typeof(GenericEqualityComparer<int>), t);
}
当我们的T(也就是Dictionary<TKey,TValue>里的TKey)实现了IEquatable<T>接口的话,这里会动态创建一个实现了IEqualityComparer<TKey>的类,并实例化返回作为TKey的EqualityComparer。
"This works for reference types (as the reflection+create a new type step is skipped)"
当我们传递的TKey是reference type的时候,不会触发创建实现了IEqualityComparer<TKey>的类,而是直接返回ObjectEqualityComparer<T>。
解决方案:
如果我们一定要在IOS上把value type作为Dictionary的Key的话,我们需要自己写一个实现了IEqualityComparer<TKey>的类,然后作为TKey的EqualityComparer传递给Dictionary<TKey,TValue>的构造函数。
比如我们要给int添加我们自己的比较方法,那么按如下方式写即可。
using System;
using System.Collections.Generic;

public class ValueTypeComparer : EqualityComparer<int>
{
    public override bool Equals(int a, int b)
    {
        Console.WriteLine("ValueTypeComparer:Equals() called");
        if (a == b)
        {
            return true;
        }
        else
        {
            return false;
        }
    }

    public override int GetHashCode(int a)
    {
        Console.WriteLine("ValueTypeComparer:GetHashCode() called");
        return a.GetHashCode();
    }
}

class Program
{
    static void Main(string[] args)
    {
        ValueTypeComparer valuetypecomparer = new ValueTypeComparer();
        int vt1 = 1;
        Dictionary<int, bool> mydictionary1 = new Dictionary<int, bool>(valuetypecomparer);
        mydictionary1.Add(vt1, true);
        mydictionary1.ContainsKey(vt1);
    }
}
![ValueTypeComparerForDictionary](/img/CSharp/ValueTypeComparerForDictionary.PNG)
  3. System.Reflection.Emit
    The System.Reflection.Emit namespace contains classes that allow a compiler or tool to emit metadata and Microsoft intermediate language (MSIL) and optionally generate a PE file on disk.
    可以看出System.Reflection.Emit是用于动态生成代码,这一点明显违背了Full-AOT的原则。
    在了解Emit是如何动态生成代码之前,可以先了解一下AppDomain的概念
    AppDomain负责Assembly的加载和隔离,代码是加载在AppDomain里执行的。
    接下来让我们看看System.Reflection.Emit是如何动态生成代码的:

可以看出System.Reflection.Emit包含了很多可以动态创建程序集,类,方法的类。
以下学习主要参考C#反射发出System.Reflection.Emit学习
这位博主对Emit研究得很透彻,讲解得也通俗易懂。
Emit生成代码的基本流程:

  1. 构建程序集
AssemblyName aname = new AssemblyName("DynamicAssembly");
AssemblyBuilder ab = AppDomain.CurrentDomain.DefineDynamicAssembly(aname, AssemblyBuilderAccess.RunAndSave);
  2. 创建模块
ModuleBuilder mb = ab.DefineDynamicModule(aname.Name, aname.Name + ".dll");
  3. 定义类
TypeBuilder tb = mb.DefineType("DynamicType", TypeAttributes.Public);
  4. 定义类成员(方法,属性等)
// 创建方法签名
MethodBuilder methodb = tb.DefineMethod("Hello", MethodAttributes.Public);
// 定义方法实现
// 这里比较重要,可以把这里理解成代码反编译之后的函数调用操作的代码化
ILGenerator il = methodb.GetILGenerator();
//OpCodes包含所有的Microsoft Intermediate Language (MSIL) instructions
//OpCodes.Ldstr表示加载一个字符串到evaluation stack
il.Emit(OpCodes.Ldstr, "Hello, World!");
//OpCodes.Call表示调用方法
il.Emit(OpCodes.Call, typeof(Console).GetMethod("WriteLine", new Type[] { typeof(string) }));
//OpCodes.Ret表示返回,当evaluation stack有值时会返回栈顶值
il.Emit(OpCodes.Ret);

tb.CreateType();
  5. 创建Assembly
ab.Save("DynamicAssemble.dll");

EmitStudy
结合反编译我们生成的DynamicAssemble.dll可以看出,我们是通过Emit把函数定义用IL指令定义出来了。
毫无疑问这是动态生成了程序集,方法等,所以在Full-AOT面前,IOS是不支持的。
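作为类比,动态语言里"运行时生成代码"大致是下面这个样子(Python示意,帮助理解为什么这类运行时代码生成在full-AOT下无法工作):

```python
# 用Python类比Reflection.Emit: 运行时把一段源码字符串编译并执行,
# 得到一个之前并不存在的函数(full-AOT恰恰禁止这种运行时代码生成)
source = '''
def hello():
    return "Hello, World!"
'''
namespace = {}
exec(compile(source, "<dynamic>", "exec"), namespace)
print(namespace["hello"]())   # Hello, World!
```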
让我们来看个实际的问题:
[The game crashes with the error message "ExecutionEngineException: Attempting to JIT compile method 'SometType`1:.ctor ()' while running with --aot-only.](https://docs.unity3d.com/Manual/TroubleShootingIPhone.html)
这里是由于在序列化的时候使用了泛型方法导致的。
The Mono .NET implementation for iOS is based on AOT. It compiles only those generic type methods (where a value type is used as a generic parameter) which are explicitly used by other code. When such methods are used only via reflection or from native code (ie, the serialization system) then they get skipped during AOT compilation.The AOT compiler can be hinted to include code by adding a dummy method somewhere in the script code. This can refer to the missing methods and so get them compiled ahead of time.
可以看出在Full-AOT下,如果我们只在反射和序列化中使用泛型方法,该方法会被AOT过滤掉,不预编译;要想使该泛型方法参与预编译,我们需要定义一个不被使用的dummy方法去显式调用该类,以通知AOT去预编译该方法。
本人遇到这个问题是:
ExecutionEngineException: Attempting to JIT compile method '**TypeMetadata:.ctor ()' while running with --aot-only
Why did my BinarySerializer stop working?
根据上面的说法是BinaryFormatter里的ObjectWriter去动态生成了我们自定义的value type类导致的JIT。
通过下面代码可以使ObjectWriter采用反射去实现而非JIT生成动态类。

Environment.SetEnvironmentVariable("MONO_REFLECTION_SERIALIZER", "yes");

但是反射很慢,如果频繁进行序列化反序列化的话,在IOS上并不是一种好的解决办法。
根据这里的讨论,可以看出Google Protocol Buffers是一个不错的解决方案(当然我们得绕过默认使用JIT)。

Note:
IOS只支持static code, Unlike traditional Mono/.NET, code on the iPhone is statically compiled ahead of time instead of being compiled on demand by a JIT compiler(IOS只支持full aot,不支持JIT).所以有些动态特性没法被Full AOT支持。
But the entire Reflection API, including Type.GetType (“someClass”), listing methods, listing properties, fetching attributes and values works just fine.(反射依然在IOS可用)

Protocol Buffers

详情参考:Data-Config-Automation

Unity Using

  1. Layout – 排版,可用于存储我们在Unity里面的面板排版设置,通过制定layout使用特定排版
  2. Prefab – 原件,可在场景里重复利用,而且Prefab的改变可以通过点击Inspector界面的Apply影响到所有从该Prefab里创建出的对象
  3. Tag – Scene里面所有物体的唯一标识,用于确保正确识别物体,通过对物体添加tag可以在程序里用于身份判别
  4. Static collider – will not be affected by collisions. Unity keeps the static collider mesh in cache
  5. Dynamic collider – will be affected by collision
  6. Is Trigger – when no collision response is wanted (e.g. static collider), we can use a trigger to receive enter events(Trigger不产生物理碰撞,只触发事件回调)
  7. Rigidbody – Moved by using physical force, use dynamic collider
  8. Kinematic body – Moved by using the transform instead of physical force
  9. Mesh Collider – Use Model mesh as collider mesh (一般只用于简单的三角形数量少的mesh)
  10. AudioSource – 声音在Unity里也是组件的形式存在
  11. C# Script – 每个物体可以绑定多个脚本用于不同逻辑,特别是用于原件一些特有的逻辑特性
  12. 2.5D – 在3D的世界里通过平行投影(Orthographic Projection - isometric projection)实现
  13. Raycast – 射线碰撞检测(Physics.Raycast()在物体是is trigger on的时候不会触发)
  14. LayerMask – 可以用于选择和射线检测过滤
  15. NavMeshAgent – Unity的Navigation system可以提供最基本的AI寻路(只需要指定agent相关参数,在代码里设定跟踪目标) – 添加后需要Bake Navigation
  16. Animation Controller – 动画管理,通过给物体添加Animator并设定动画状态机之间的切换规则来实现动画状态切换管理(通过给UI添加Animator我们也可以在Anmation面板设置简单动画)
  17. LineRenderer – 可以用于在3D世界里绘制线条,实现可视化一些射线检测物体碰撞等
  18. Select icon – 可以给没有实际物体或透明物体一个颜色标记(容易看到3D世界位置)
  19. Canvas – 画布是所有UI所应该在的区域(UI的层级关系会影响渲染顺序) – 通过设定Render Mode实现不同的效果(Screen Space - Overlay不会受Camera的投影方式影响,永远在场景上. Screen Space - Camera会受Camera的投影方式影响,比如在透视投影里就会有近大远小的效果. World Space会把UI当做3D空间里的物体来对待,有深度概念会被遮挡 ))
  20. Physics2D.Linecast – 可用于检测特定layer的射线检测
  21. Input – 不同平台要使用不同平台的API
    e.g.
#if UNITY_EDITOR || UNITY_STANDALONE || UNITY_WEBPLAYER
    horizontal = (int)Input.GetAxisRaw ("Horizontal");
    vertical = (int)Input.GetAxisRaw ("Vertical");

    if (horizontal != 0) {
        vertical = 0;
    }
#else
    if(Input.touchCount > 0)
    {
        Touch myTouch = Input.touches[0];
        if(myTouch.phase == TouchPhase.Began)
        {
            touchOrigin = myTouch.position;
        }
        else if(myTouch.phase == TouchPhase.Ended && touchOrigin.x >= 0)
        {
            Vector2 touchEnd = myTouch.position;
            float x = touchEnd.x - touchOrigin.x;
            float y = touchEnd.y - touchOrigin.y;
            touchOrigin.x = -1;
            if(Mathf.Abs(x) > Mathf.Abs(y))
            {
                horizontal = x > 0 ? 1 : -1;
            }
            else
            {
                vertical = y > 0 ? 1 : -1;
            }
        }
    }
#endif
  22. DontDestroyOnLoad() – 加载新的scene的时候保证对象不会被销毁
  23. Invoke() – 延时调用方法

快捷键学习

F – 在Scene界面选中对象后可以快速聚焦到该物体
Ctrl + ‘ – 打开脚本的类Reference page
Ctrl + D – Duplicated(复制)物体

项目编译发布

PC

  1. File -> Build Setting 设置平台PC
  2. Drag Scene file to Scenes to build 选择要编译的场景
  3. 点击Build 编译游戏

IOS

Native Code Compilation

PC平台使用C++编写的库的时候是通过lib或者dll的静态和动态链接
但在IOS移动平台使用的时候,这些代码必须以插件的形式调用,以静态方式链接。
Managed plugins and Native Plugins

Note:

#if UNITY_IPHONE || UNITY_XBOX360
//On iOS and Xbox 360 plugins are statically linked into
//the executable, so we have to use __Internal as the
//library name.
[DllImport ("__Internal")]
#else
// Other platforms load plugins dynamically, so pass the name
// of the plugin's dynamic library.
[DllImport ("PluginName")]
#endif

在真正编译到IOS上使用之前,我们必须通过cross compile把Native code编译成.a的文件,然后静态链接到项目中使用。(需要在Mac电脑上cross compile)

以下以开源工具Zlib为例:
Zlib Download

参考文章:Building Universal Binaries for iOS

解压打开文件夹后会发现,目录下有Makefile, configure, Makefile.in等文件。Makefile是编写了编译规则的文件,用于自动化编译(在Unix,Linux系统上广泛运用)。而configure和Makefile.in是通过Autoconf和Automake工具生成的,用高阶描述来生成makefile而无需手动编写复杂的makefile。通过调用./configure并传入参数,configure就会以Makefile.in为模板产生我们预期的Makefile。(这里我对Makefile,autoconf,automake都不熟悉,现阶段的认识是这样的)

我们将会通过.configure生成我们需要的Makefile用于编译程序:

  1. ./configure --prefix=${PWD}/installdir(用于生成makefile,--prefix用于指定make install后的文件夹)
  2. unset CC(先重置一次CC编译设定,避免出问题)
  3. export CC="xcrun -sdk iphoneos clang -arch armv7"(这里很重要,在调用makefile之前,我们通过设置环境变量CC的值来指定make的编译设定,比如编译工具,编译架构等,-sdk 指定SDK路径用于搜索相关工具(比如编译工具等),-arch 用于指定编译出的文件架构类型)
  4. make clean(清理make生成的文件)
  5. make(执行makefile进行编译)
  6. make install(安装make生成的相关文件到对应目录)
  7. lipo -info libz.a(通过lipo工具来查看生成的libz.a文件是基于什么架构的)

上述有几个概念需要提一下:
xcrun(个人理解是,通过这个工具可以在不改makefile的前提下,通过命令行指定开发工具的一些信息)
clang – Clang是一个C、C++、Objective-C和Objective-C++编程语言的编译器前端,采用底层虚拟机(LLVM)作为其后端,目标是提供GNU编译器套装(GCC)的替代品。Clang作为LLVM编译器工具集的前端(front-end),输出代码对应的抽象语法树(Abstract Syntax Tree, AST),并将代码编译成LLVM Bitcode,接着在后端(back-end)使用LLVM编译成平台相关的机器语言。(可以理解成Mac上的编译工具,支持多种语言编译生成多种平台相关的机器语言)
Universal binary – 可以理解成multiarchitecture binary,支持多种架构的二进制文件(比如我们这里针对IOS生成的armv7架构的libz.a,我们也可以再生成一个针对simulator的i386架构的libz.a,然后通过执行lipo -create -arch i386 libz.a(i386) -arch armv7 libz.a(armv7) -output libz_fatbinary.a生成同时支持simulator和IOS的.a文件)
lipo – 苹果上用于生成Universal binary的工具(lipo -info libz.a用于查看.a文件支持的架构信息)

Note:
Simulator(模拟器)使用的.a文件是基于i386的,而IOS真机使用的.a文件是基于armv7 or armv8的。

Prepare XCode Project

  1. File -> Build Setting 设置平台IOS
  2. Drag Scene file to Scenes to build 选择要编译的场景
  3. 点击Build

这样一来XCode项目就生成了。

Compile XCode Project

条件:

  1. 需要苹果开发者账号
    待续……

Note:
跨平台自动化编译工具CMake

版块学习

UGUI

在Unity 4.6版本后推出的官方的GUI

Coordinates System

下面的定义来源

  1. Screen coordinates
    Is 2D, measured in pixels and start in the lower left corner at (0,0) and go to (Screen.width, Screen.height). Screen coordinates change with the resolution of the device, and even the orientation (if your app allows it) on mobile devices.
    左下角为(0,0),右上角为(Screen.width, Screen.height)

  2. GUI coordinates
    Is used by the GUI system. They are identical to Screen coordinates except that they start at (0,0) in the upper left and go to (Screen.width, Screen.height) in the lower right.
    左上角为(0,0),右下角为(Screen.width, Screen.height)

  3. Viewport coordinates
    Is the same no matter what the resolution. They are 2D, start at (0,0) in the lower left and go to (1,1) in the upper right. For example (0.5, 0.5) in viewport coordinates will be the center of the screen no matter what resolution or orientation.
    相对于摄像机坐标系而言的,近平面(0,0),远平面(1,1),(0.5,0.5)表示在摄像机坐标系的中间。

  4. World coordinates
    Is a 3D coordinates system and where all of your object live.
    世界坐标系的(x,y,z)
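上面几种2D坐标系之间的换算本质上只是归一化和Y轴翻转,下面用一段Python小示例演示(纯示意函数,并非Unity API,假设屏幕为1024x768):

```python
# 各坐标系换算示意(纯示意函数, 并非Unity API)
def screen_to_viewport(x, y, width, height):
    # Screen左下角(0,0) -> Viewport左下角(0,0), 右上角(1,1)
    return (x / width, y / height)

def screen_to_gui(x, y, width, height):
    # GUI坐标与Screen坐标只差Y轴方向: GUI以左上角为(0,0)
    return (x, height - y)

print(screen_to_viewport(512, 384, 1024, 768))   # (0.5, 0.5) 即屏幕中心
print(screen_to_gui(0, 768, 1024, 768))          # (0, 0) 即GUI左上角
```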

Unity Unit & Pixel Per Unit

  1. Unity Unit
    Unity Unit代表Unity里的一个Unit单位代表物理大小,默认是1 unit = 1 meter
    所以我们在建模的时候如果想导入到Unity后保持scale(1,1,1)就应该把建模软件也设定成1 unit = 1 meter。
    Unity Unit主要影响的是Physical(比如重力加速度的运算,如果修改Unit为厘米,但不重新计算重力加速度,那么物体会落的很快)

  2. Pixel Per Unit
    Pixel Per Unit主要影响Sprite在屏幕上的映射显示。表示多少个像素等价于一个Unit,后面会详细讲到。

Canvas

The Canvas is the area that all UI elements should be inside.(所有的UI元素都必须处于Canvas里)
以下学习参考Unity UGUI 原理篇(二):Canvas Scaler 縮放核心
UI Render Space:
Screen Space(Overlay) – 不会受Camera的投影方式影响,永远在场景上.(不用设置Camera)
Screen Space(Camera) – 需要设置Camera,会受Camera的投影方式影响,比如在透视投影里就会有近大远小的效果,有遮挡的概念。(正交投影会根据摄像机的Orthographic Size和PPU设定去显示UI)Screen Space下屏幕分辨率变化时,Canvas会自动缩放UI去适应。
World Space – 会把UI当做3D空间里的物体来对待,有深度概念会被遮挡。

Canvas Scaler:
The Canvas Scaler component is used for controlling the overall scale and pixel density of UI elements in the Canvas. This scaling affects everything under the Canvas, including font sizes and image borders.
So the Canvas Scaler controls how the UI is actually displayed on the canvas, much like the Scaling Style setting on NGUI's UIRoot.
UI Scale Mode has three options:

  1. Constant Pixel Size
    Makes UI elements retain the same size in pixels regardless of screen size.
    Keeps the UI's pixel size independent of screen size (equivalent to Pixel Perfect on NGUI's UIRoot).
    Parameter Scale Factor – Scales all UI elements in the Canvas by this factor.
    Scale Factor scales the Canvas size, and with it all UI under the Canvas.
    With a 1024 * 768 screen and Scale Factor 1:
    ConstantPixelSizeScaleFactor
    the Canvas size is 1024 * 768 and its scale is (1,1,1).
    With Scale Factor 2:
    ConstantPixelSizeScaleFactor2
    the Canvas size is 512 * 384 and its scale is (2,2,1),
    so everything under the Canvas is effectively doubled in size.
    Note:
    Screen size and Canvas size are two different concepts.
    Parameter Reference Pixels Per Unit – If a sprite has this 'Pixels Per Unit' setting, then one pixel in the sprite will cover one unit in the UI.
    The reference PPU states how many pixels make one unit; when the sprite's own PPU matches it, one sprite pixel maps to one UI pixel.
    The relevant source (quoted from 源码来源):
    Image.cs
public float pixelsPerUnit
{
    get
    {
        float spritePixelsPerUnit = 100;
        if (sprite)
            spritePixelsPerUnit = sprite.pixelsPerUnit;

        float referencePixelsPerUnit = 100;
        if (canvas)
            referencePixelsPerUnit = canvas.referencePixelsPerUnit;

        return spritePixelsPerUnit / referencePixelsPerUnit;
    }
}

public override void SetNativeSize()
{
    if (overrideSprite != null)
    {
        float w = overrideSprite.rect.width / pixelsPerUnit;
        float h = overrideSprite.rect.height / pixelsPerUnit;
        rectTransform.anchorMax = rectTransform.anchorMin;
        rectTransform.sizeDelta = new Vector2(w, h);
        SetAllDirty();
    }
}
As the code shows, once the sprite's PPU is set, the size of the sprite's displayed area (rect) is:
SpriteRectSize = SpriteSize * CanvasReferencePPU / SpritePPU
so if SpritePPU equals CanvasReferencePPU, the sprite rect keeps the sprite's original size. (Note the division: a sprite PPU higher than the reference shrinks the rect.)
Note:
SetNativeSize runs when you click Set Native Size on the Image component.
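The rect-size computation in Image.cs can be sanity-checked with a tiny standalone calculation (the helper name and the 128 px example are my own; the defaults of 100 mirror Image.pixelsPerUnit above):

```python
def sprite_rect_size(sprite_size, sprite_ppu=100, canvas_reference_ppu=100):
    """Mirrors Image.SetNativeSize: rect = spriteSize / (spritePPU / referencePPU)."""
    pixels_per_unit = sprite_ppu / canvas_reference_ppu
    return sprite_size / pixels_per_unit

# Matching PPUs keep the sprite at its original size:
print(sprite_rect_size(128, 100, 100))  # 128.0
# Doubling the sprite's own PPU halves its rect:
print(sprite_rect_size(128, 200, 100))  # 64.0
```

A higher sprite PPU packs more source pixels into each unit, so the on-screen rect shrinks.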
  2. Scale With Screen Size
    Adapts based on a preset reference resolution (equivalent to Fixed Size in NGUI).
    The Screen Match Mode setting decides how to adapt to width and height changes.
    From the CanvasScaler.cs source:
Vector2 screenSize = new Vector2(Screen.width, Screen.height);

float scaleFactor = 0;
switch (m_ScreenMatchMode)
{
    case ScreenMatchMode.MatchWidthOrHeight:
    {
        // We take the log of the relative width and height before taking the average.
        // Then we transform it back in the original space.
        // the reason to transform in and out of logarithmic space is to have better behavior.
        // If one axis has twice resolution and the other has half, it should even out if widthOrHeight value is at 0.5.
        // In normal space the average would be (0.5 + 2) / 2 = 1.25
        // In logarithmic space the average is (-1 + 1) / 2 = 0
        float logWidth = Mathf.Log(screenSize.x / m_ReferenceResolution.x, kLogBase);
        float logHeight = Mathf.Log(screenSize.y / m_ReferenceResolution.y, kLogBase);
        float logWeightedAverage = Mathf.Lerp(logWidth, logHeight, m_MatchWidthOrHeight);
        scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage);
        break;
    }
    case ScreenMatchMode.Expand:
    {
        scaleFactor = Mathf.Min(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
        break;
    }
    case ScreenMatchMode.Shrink:
    {
        scaleFactor = Mathf.Max(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
        break;
    }
}
Screen Match Mode has three options:
1. MatchWidthOrHeight (compute the scale factor, which scales the Canvas size, from how the screen's width and height differ from the reference resolution)
	Example:
	Reference resolution 1024 * 768, screen size 960 * 640.
    logWidth = Mathf.Log(screenSize.x / m_ReferenceResolution.x, kLogBase) = Log2(960 / 1024) = Log2(0.9375);
    logHeight = Mathf.Log(screenSize.y / m_ReferenceResolution.y, kLogBase) = Log2(640 / 768) = Log2(0.8333);
    logWeightedAverage = Mathf.Lerp(logWidth, logHeight, m_MatchWidthOrHeight) = Lerp(Log2(0.9375), Log2(0.8333), MatchWidthOrHeight);
    scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage) = Pow(2, Lerp(Log2(0.9375), Log2(0.8333), MatchWidthOrHeight));
    MatchWidthOrHeight decides the weight given to the width scale versus the height scale.
    With MatchWidthOrHeight = 0:
    scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage) = Pow(2, Lerp(Log2(0.9375), Log2(0.8333), 0)) = Pow(2, Log2(0.9375)) = 0.9375;
    Canvas width = screen width / scaleFactor = 960 / 0.9375 = 1024
    Canvas height = screen height / scaleFactor = 640 / 0.9375 = 682.667
    Why take logarithms before averaging?
	Suppose the reference resolution is 400*300 and the screen size is 200*600, so
	the reference width is 2x the screen width, and
	the reference height is 0.5x the screen height,
	as pictured below:
	![ScaleWithScreenSize](/img/Unity/ScaleWithScreenSize.PNG)
	With Match at 0.5 the scale factor should come out as exactly 1 ("evening out"): one axis doubled while the other halved, so the two changes should cancel.
	ScaleFactor Width: 200/400 = 0.5
	ScaleFactor Height: 600/300 = 2
	Linear blend:
	ScaleFactor = Match * ScaleFactorWidth + (1 - Match) * ScaleFactorHeight
	ScaleFactor = 0.5 * 0.5 + 0.5 * 2 = 1.25
	Logarithmic blend:
	logWidth: log2(0.5) = -1
	logHeight: log2(2) = 1
	logWeightedAverage: 0
	ScaleFactor: Pow(2, 0) = 1
	The linear blend gives 1.25 while the logarithmic blend gives 1; clearly the logarithmic blend corrects the size more faithfully.
2. Expand (expand the Canvas size along width or height)
	Uses the smaller of ScaleFactorWidth and ScaleFactorHeight.
3. Shrink (shrink the Canvas size along width or height)
	Uses the larger of ScaleFactorWidth and ScaleFactorHeight.
	In short, UGUI's adaptive layout always works by dynamically computing the scale factor (which determines the Canvas size, i.e. the Canvas scale).
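The MatchWidthOrHeight math above can be reproduced outside Unity as a small sketch (plain Python standing in for the Mathf calls):

```python
import math

def scale_factor_match(screen, reference, match):
    """Log-space width/height blend, as in CanvasScaler.MatchWidthOrHeight."""
    log_w = math.log2(screen[0] / reference[0])
    log_h = math.log2(screen[1] / reference[1])
    blended = log_w + (log_h - log_w) * match  # Mathf.Lerp, in log space
    return 2 ** blended

# The 400*300 reference vs. 200*600 screen example: the log blend gives
# exactly 1 at match = 0.5, where a linear blend would give 1.25.
print(scale_factor_match((200, 600), (400, 300), 0.5))  # 1.0
# The 1024*768 reference vs. 960*640 screen example at match = 0:
print(round(scale_factor_match((960, 640), (1024, 768), 0.0), 4))  # 0.9375
```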
  3. Constant Physical Size
    Makes UI elements retain the same physical size regardless of screen size and resolution.
    Keeps the UI's physical size constant, based on DPI (dots per inch).
    ConstantPhysicalSize
    1. Physical Unit: the unit of measurement to use
| Physical Unit | Meaning                  | Units per inch |
| ------------- | ------------------------ | -------------- |
| Centimeters   | centimeters (cm)         | 2.54           |
| Millimeters   | millimeters (mm)         | 25.4           |
| Inches        | inches                   | 1              |
| Points        | typographic points       | 72             |
| Picas         | picas (12-point type)    | 6              |
2. Fallback Screen DPI: the DPI to fall back on when the device DPI cannot be determined
3. Default Sprite DPI: the DPI sprites are assumed to have been authored at
float currentDpi = Screen.dpi;
float dpi = (currentDpi == 0 ? m_FallbackScreenDPI : currentDpi);
float targetDPI = 1;
switch (m_PhysicalUnit)
{
    case Unit.Centimeters: targetDPI = 2.54f; break;
    case Unit.Millimeters: targetDPI = 25.4f; break;
    case Unit.Inches:      targetDPI = 1;     break;
    case Unit.Points:      targetDPI = 72;    break;
    case Unit.Picas:       targetDPI = 6;     break;
}

SetScaleFactor(dpi / targetDPI);
SetReferencePixelsPerUnit(m_ReferencePixelsPerUnit * targetDPI / m_DefaultSpriteDPI);

Conclusions:
ScaleFactor is the ratio of the current hardware DPI to the target unit's DPI.
ReferencePixelsPerUnit is recomputed against the current DPI and passed to the Canvas to derive sizes:
new Reference Pixels Per Unit = Reference Pixels Per Unit * target DPI (from Physical Unit) / Default Sprite DPI
UI size = source image size (pixels) / (Pixels Per Unit / new Reference Pixels Per Unit)
(I have not fully understood Constant Physical Size yet; the material above comes from the linked source.)
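The Constant Physical Size math condenses into one small function (a sketch of the CanvasScaler snippet above; the helper name and the 96-dpi example values are my own assumptions, not engine constants):

```python
def constant_physical_size(screen_dpi, unit='Inches', fallback_dpi=96,
                           reference_ppu=100, default_sprite_dpi=96):
    """Sketch of the Constant Physical Size computation shown above."""
    target_dpi = {'Centimeters': 2.54, 'Millimeters': 25.4,
                  'Inches': 1, 'Points': 72, 'Picas': 6}[unit]
    dpi = screen_dpi if screen_dpi else fallback_dpi   # Screen.dpi == 0 fallback
    scale_factor = dpi / target_dpi
    new_reference_ppu = reference_ppu * target_dpi / default_sprite_dpi
    return scale_factor, new_reference_ppu

# A 96-dpi screen measured in points: scaleFactor = 96 / 72, and the
# reference PPU is rescaled to 100 * 72 / 96 = 75.
print(constant_physical_size(96, 'Points'))
```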

RectTransform:
The Rect Transform is a new transform component that is used for all UI elements instead of the regular Transform component. Rect Transforms have position, rotation, and scale just like regular Transforms, but it also has a width and height, used to specify the dimensions of the rectangle.
RectTransform specifies UI-related layout information (position, rotation, scale, anchors, pivot, and so on).
RectTransform

Pivot:
Transform changes (scale, position, rotation) are all applied relative to this point.

Anchors:
Anchors are mainly for UI layout. Depending on where they are placed, the UI adapts appropriately when the parent RectTransform changes (anchors here do the job of both UIAnchor and UIStretch in NGUI). The four anchor points maintain the relative positions they define, either as a fraction of the parent or at fixed pixel offsets.

Layout System:
The Layout System is built on top of RectTransform and automatically adjusts the size, position, spacing, etc. of one or more elements (important for UI layout).
For details, see Unity UGUI 原理篇(五):Auto Layout 自動佈局

Conclusions:
UI Render Mode decides how the UI is displayed.
UI Scale Mode decides the overall adaptation strategy (keep pixel size, keep proportions, or keep physical size).
The Pivot is the reference point that adaptation is applied around.
Anchors decide what the adaptation is relative to. For example, if we put all anchors at the top-left of the parent RectTransform, the UI keeps its position relative to that corner no matter how the parent resizes (its size can still change, because with Scale With Screen Size the Canvas scale adapts to the screen). If we instead put the four anchors at the four corners of the parent RectTransform, the UI stretches with the parent's size (commonly used for backgrounds that must fill the screen, possibly with stretching).

Before the final conclusions, consider the pixel-perfect display problem.
How do we guarantee pixel-perfect 2D?
Reference article: Pixel Perfect 2D
The secret to making your pixelated game look nice is to ensure that your sprite is rendered on a nice pixel boundary. In other words, ensure that each pixel of your sprite is rendered on one screen pixel (or any other round number). The trick to achieving this result is tweaking the camera's orthographic size (and living with the consequences).
So for pixel-perfect display, each sprite pixel must map to an integer number of screen pixels (ideally 1:1), which we can arrange by adjusting the camera's Orthographic Size. As noted in my NGUI 2.7 screen-adaptation notes, we must keep PPU = Screen.height / 2 / Orthographic Size.
Since we cannot change the physical Screen.height, displaying pixels perfectly across devices means either adjusting Orthographic Size dynamically or shipping assets at several PPUs and switching between them.
For details:

| Vertical Resolution | PPU | PPU Scale | Orthographic Size | Size Change |
| ------------------- | --- | --------- | ----------------- | ----------- |
| 768                 | 32  | 1x        | 12                | 100%        |
| 1080                | 32  | 1x        | 16.875            | 140%        |
| 1080                | 48  | 1x        | 11.25             | 93.75%      |
| 1080                | 32  | 2x        | 8.4375            | 70.31%      |
| 1440                | 32  | 2x        | 11.25             | 93.75%      |
| 1536                | 32  | 2x        | 12                | 100%        |
1536 | 32 | 2x | 12 | 100%

However, changing the Orthographic Size changes the visible world space, so further correction is needed depending on the project:
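The Orthographic Size column of the table follows directly from Orthographic Size = Screen.height / 2 / (PPU * PPU scale); a quick check (the helper name is illustrative):

```python
def orthographic_size(screen_height, ppu, ppu_scale=1):
    """Pixel-perfect orthographic size for a given vertical resolution."""
    return screen_height / 2 / (ppu * ppu_scale)

# Reproduce three rows of the table above:
print(orthographic_size(768, 32))      # 12.0
print(orthographic_size(1080, 32))     # 16.875
print(orthographic_size(1536, 32, 2))  # 12.0
```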

  1. Thick Borders
    ThickBorder
    If a 2D game has border framing and the Orthographic Size change is small, we can compensate just by adjusting the border thickness.
  2. Increase Asset Resolution
    Produce assets at several PPUs to correct the Orthographic Size (as the table above shows, switching PPU assets keeps the size change small while staying pixel perfect). (If the size change is small, option 1 can finish the correction case by case.)
  3. Halving Orthographic Size
    If Screen.height varies a lot, so that the computed Orthographic Size does too, we can correct the Orthographic Size by powers of two so the size change stays small, then use thick borders for the final adjustment. (For example, going from 768 to 1440 in the table, we use not 22.5 but 22.5 / 2 = 11.25. The advantage is that no extra PPU asset sets are needed.)

The conclusion for UGUI adaptation is then straightforward. Scaling by proportion is the usual choice, so the setup is (taking 2D as the example):
UI Render Mode – Screen Space (Camera): make the Main Camera orthographic and set Orthographic Size = Screen.height / 2 / PPU (Screen Space (Camera) is used so the orthographic camera subdivides screen units).
UI Scale Mode – Scale With Screen Size: set the reference resolution, and set Screen Match Mode to MatchWidthOrHeight with a value from 0 to 1 (weighting width versus height; pick per project, e.g. landscape and portrait 2D games tolerate letterboxing differently).
Pivot – normally the center of the UI element.
Anchors – to stretch with the parent RectTransform, put them at the parent's four corners (may stretch; mostly for backgrounds). To keep a position at a fixed fraction of the parent, put the anchors at that fraction. To simply keep a fixed relative position, put all four anchors on one point (most UI has a fixed position, so this is the common case). For more complex arrangements, use Layout and Layout Groups.
Artists should author against the reference resolution and PPU chosen in UI Scale Mode (the reference resolution fixes the background size; the PPU fixes UI sizes via PPU = Screen.height / 2 / Orthographic Size; e.g. with Screen.height = 768 and Orthographic Size = 12, PPU = 32, so 1 unit = 32 pixels and the screen is 24 units tall, and a button meant to be 1 unit tall is authored at 32 * 32 pixels).
Then, at runtime, compute the Orthographic Size dynamically to keep the display pixel perfect.

Orthographic Size = Screen.height / 2 / PPU(32);

When Screen.height varies widely (so the Orthographic Size does too), correct the Orthographic Size by factors of two (alternatively, ship multiple PPU asset sets and swap them dynamically).
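The power-of-two correction can be sketched as a small calculation; note that the snapping rule and its 0.75 threshold here are my own illustrative assumptions, not anything Unity provides:

```python
def snapped_orthographic_size(screen_height, ppu, base_height=768):
    """Hypothetical sketch: start from the pixel-perfect size
    Screen.height / 2 / PPU, then divide by a power of two so the visible
    world-space height stays near the base resolution's. The 0.75
    threshold is an assumption for illustration only."""
    raw = screen_height / 2 / ppu
    base = base_height / 2 / ppu
    factor = 1
    while raw / (factor * 2) >= base * 0.75:
        factor *= 2
    return raw / factor

# 1440 px tall at PPU 32: the raw size 22.5 is halved to 11.25
print(snapped_orthographic_size(1440, 32))  # 11.25
print(snapped_orthographic_size(768, 32))   # 12.0
```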

EventSystem

For UI to respond to UI events, the scene must contain an EventSystem (one is created automatically along with a UI Canvas).
EventSystem
The EventSystem object currently consists of two main components:

  1. Event System
    The EventSystem is responsible for processing and handling events in a Unity scene. A scene should only contain one EventSystem.
    The EventSystem processes the scene's events and drives the Input Modules.
    After pressing Play, select the EventSystem object: as you interact with the scene, it displays the current interaction state.
    SystemInfo
    Send Navigation Events – toggles UI navigation (selecting elements with the keyboard arrow keys).
    Setting a Button's Navigation to Explicit lets us specify exactly where navigation goes from that Button:
    ButtonUINavigation
  2. Standalone Input Module
    Input module for working with mouse, keyboard, or controller. An Input Module is a component of the EventSystem that is responsible for raising events and sending them to GameObjects for handling.
    It handles input and dispatches the resulting events to the target objects.
    EventSystem flow:
    1. The user provides input (touch, keyboard)
    2. The scene's Raycasters work out which element was hit
    3. The Input Module sends the event to that object
      The GraphicRaycaster on the Canvas configures how UI raycasts respond:
      GraphicRaycast
      Likewise, a PhysicsRaycaster handles physics raycast responses (e.g. a click landing on an object with a 3D collider).

Note:
UI elements in the Canvas are drawn in the same order they appear in the Hierarchy.

UGUI Atlas

The notes below follow UGUI研究院之全面理解图集与使用(三).
In NGUI an atlas is built ahead of time, but in UGUI the concept is implicit: we configure it on the Sprite itself:
UGUISpriteEditor
The Packing Tag decides which atlas the sprite goes into.
The atlas is then packed at build time, or whenever the Sprite Packer runs; the Sprite Packer is configured under Edit -> Project Settings -> Editor:
SpritePackerSetting
To make it easy to inspect packing results and draw calls, use Always Enable.
You can also open the Sprite Packer window to inspect how the atlas was packed:
UGUISpritePacker
The current draw calls can be seen in the Profiler:
ProfilerDrawCall
For creating Sprites dynamically by image name (attach the sub-images to a prefab and load it via Resources.Load) and for updating atlases online (AssetBundle), see UGUI研究院之全面理解图集与使用(三)
Note:
Because all sprites in an atlas share one texture, only that one texture needs loading, and switching UV coordinates is enough to render any sprite in it, so all sprites from the same atlas can be rendered in a single draw call. (Draw calls will be studied in more depth later.)
Sprites under Resources are not packed into atlases.
Packed atlases are stored in Library/AtlasCache, next to the Assets folder.

As for font atlases, UGUI uses dynamic fonts: importing a .ttf (or similar) file is enough.

Multiplayer Networking

The High Level API

Using this means you get access to commands which cover most of the common requirements for multiuser games without needing to worry about the “lower level” implementation details.
A high-level abstraction over multiplayer network transport, convenient for quickly building simple multiplayer games.

Multiplayer networking concepts:
Server and Host:
HLAPIHostAndClientRelationShip
The host is a server and a client in the same process. The host uses a special kind of client called the LocalClient, while other clients are RemoteClients. The LocalClient communicates with the (local) server through direct function calls and message queues, since it is in the same process. It actually shares the scene with the server. RemoteClients communicate with the server over a regular network connection.
The host runs a server and a LocalClient in the same process, communicating via direct function calls and message queues; the other clients are RemoteClients that talk to the server over a real network connection.

It supports:

  1. Control the networked state of the game using a "Network Manager".
  2. Operate "client hosted" games, where the host is also a player client.
  3. Serialize data using a general-purpose serializer.
  4. Send and receive network messages.
  5. Send networked commands from clients to servers.
  6. Make remote procedure calls (RPCs) from servers to clients.
  7. Send networked events from servers to clients.

For a hands-on example, see Multiplayer Networking

The HLAPI is a new set of networking commands built into Unity, within a new namespace: UnityEngine.Networking.
Under the UnityEngine.Networking namespace, the HLAPI provides a large set of convenient APIs for building multiplayer games.
Here is the rough layered structure of networking in Unity:
HLAPILayerStructureOnNetwork
More to come as I study further.

Using The Transport Layer API

The Transport Layer is a thin layer working on top of the operating system’s sockets-based networking. It’s capable of sending and receiving messages represented as arrays of bytes, and offers a number of different “quality of service” options to suit different scenarios. It is focused on flexibility and performance, and exposes an API within the UnityEngine.Networking.NetworkTransport class.
A low-level, socket-based interface for implementing custom network transports.

It supports two protocols:

  1. UDP for generic communications
  2. WebSockets for WebGL

More to come as I study further.

Unity Shader

See Unity_Shader for details.

Excel数据读取

Unity's TextAsset class reads the following file formats:
.txt, .html, .htm, .xml, .bytes, .json, .csv (comma-separated values, though the separator need not literally be a comma: tabular data, numbers and text, stored as plain text), .yaml, .fnt
Evidently Excel formats (.xlsx, .xls, etc.) are not parsed directly.
The documentation shows that TextAsset only reads text and binary files; any actual parsing is up to you.
So TextAsset is not the Excel-parsing solution I was looking for.
Let's look at the article Unity3D游戏开发之当游戏开发遇上Excel,
whose author put real effort into Excel parsing
and found three solutions:

  1. Microsoft.Office.Interop.Excel
    The Office API provided by Microsoft, exposed as COM components, which we can call to parse Excel files. It is convenient to use from C#, Visual Basic, and similar languages. The drawbacks are just as obvious: COM components depend on the OS and must be registered with it, which hurts portability; being COM-based, this runs only on Windows and cannot be made cross-platform; and parsing is slow. It therefore only suits Windows-only scenarios where parsing speed does not matter.
    Key point:
    Not cross-platform
  2. ExcelReader
    ExcelReader is the first choice for parsing Excel files when cross-platform support is the goal.
    ExcelReader website
    The description lists support for Windows, OS X, Linux, etc. (note: not mobile iOS or Android), but for desktop use that is plenty.
    Note:
    For mobile, read the data on a desktop machine first, write it out to a separate data file, and parse that file on the device.
  3. FastExcel
    FastExcel is an open-source tool written in Java. FastExcel Website. The official site has a simple demo for quick integration.

I went with ExcelReader as the solution and tried parsing Excel with it.

  1. Download the DLLs; the official recommendation is to use the prebuilt DLLs directly
    ExcelReader DLL Download
    There are two main DLLs (Excel.4.5.dll & ICSharpCode.SharpZiplib.dll)
    ExcelReaderDLL
  2. Import the DLLs into Unity (managed plugins are easy: just create a folder under Assets and copy them in)
    Note:
    One caveat: use the DLLs built against .NET 2.0, not .NET 4.5 (presumably because Mono does not yet support every .NET feature).
  3. After importing the namespaces, the ExcelReader library can be used normally.
    The code below is based on Unity3D游戏开发之当游戏开发遇上Excel, Unity3D研究院之MAC&Windows跨平台解析Excel(六十五), and Excel Data Reader - Read Excel files in .NET
using UnityEngine;
using System.Collections;
using Excel;
using System.IO;
using System.Data;
using System;

public class GameConfigurationManager
{
    public static GameConfigurationManager mLMInstance = new GameConfigurationManager();

    public string ConfigurationPath
    {
        set
        {
            mConfigurationPath = value;
        }
    }
    private string mConfigurationPath = "/Configuration/AccountPasswordAndGameSetting.xlsx";

    private bool mIsConfigurationComplete = false;

    private GameConfigurationManager()
    {

    }

    public void Init()
    {
        if (!mIsConfigurationComplete)
        {
            try
            {
                ReadConfiguration();
                mIsConfigurationComplete = true;
            }
            catch (Exception e)
            {
                mIsConfigurationComplete = false;
                Debug.Log("Exception " + e.ToString());
            }
        }
    }

    private void ReadConfiguration()
    {
        Debug.Log("Application.dataPath = " + Application.dataPath);
        Debug.Log("mConfigurationPath = " + mConfigurationPath);

        FileStream stream = File.Open(Application.dataPath + mConfigurationPath, FileMode.Open, FileAccess.Read);

        // Read from an Excel (.xlsx) file
        IExcelDataReader excelreader = ExcelReaderFactory.CreateOpenXmlReader(stream);

        // DataSet -- the result of each spreadsheet will be created in the result tables
        DataSet result = excelreader.AsDataSet();

        int sheetcount = result.Tables.Count;
        Debug.Log("sheetcount = " + sheetcount);

        for (int m = 0; m < sheetcount; m++)
        {
            int rows = result.Tables[m].Rows.Count;
            int columns = result.Tables[m].Columns.Count;
            Debug.Log(string.Format("Table[{0}] with row = {1} columns = {2}", m, rows, columns));

            for (int i = 0; i < rows; i++)
            {
                for (int j = 0; j < columns; j++)
                {
                    string value = result.Tables[m].Rows[i][j].ToString();
                    Debug.Log(string.Format("result.Tables[{0}].Rows[{1}][{2}] = {3}", m, i, j, value));
                }
            }
        }

        excelreader.Close();
    }
}

With the code above, I successfully printed the data of the two sheets in AccountPasswordAndGameSetting.xlsx, as shown below:
AccountPasswordAndGameSettingSheet1
AccountPasswordAndGameSettingSheet2
ExcelReaderOutput
Note:
The code uses DataSet, which lives in System.Data.dll, so we must also copy System.Data.dll from **\Unity\Editor\Data\Mono\lib\mono\2.0 into our Dlls folder.

Resource Management

See Unity-Resource-Manager for details.

Plugins

See Unity-Plugins for details.

C# Scripting

public members – visible and editable in the Inspector
[System.Serializable] – marks a class as serializable into the Inspector
[HideInInspector] – hides a member in the Inspector
StartCoroutine(Function()) && yield && WaitForSeconds – combined, these let game logic wait on conditions over time
GameObject.FindGameObjectWithTag(Name) – finds an object with a given tag
virtual – a virtual method can be overridden in subclasses
abstract – marks a class as abstract, or a method as abstract (to be implemented by subclasses)

Persistence - Saving and Loading Data

PlayerPrefs

Stores and accesses player preferences between game sessions. (Good for minor user settings such as resolution or difficulty.)

Backed by .plist files on iOS and by Preferences (application data files) on Android.

Serialization

What is serialization?

Serialization is the process of converting the state of an object to a set of bytes in order to store (or transmit) the object in memory, a database, or a file.
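As a minimal, language-agnostic illustration of that round trip (object to bytes and back), here is Python's pickle; the PlayerData class is only an analogue of the C# example below, not Unity API:

```python
import pickle

class PlayerData:
    """Plain object graph to serialize (illustrative stand-in)."""
    def __init__(self, health=100):
        self.health = health

data = pickle.dumps(PlayerData(90))  # object -> bytes
restored = pickle.loads(data)        # bytes -> object
print(restored.health)               # 90
```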

C# Serialization

GameController.cs

using UnityEngine;
using System.Collections;

using System;
using System.Runtime.Serialization.Formatters.Binary;
using System.IO;

using UnityEngine.UI;

public class GameController : MonoBehaviour {

    public static GameController mController;

    public PlayerData mPlayerData;

    public float mPreferenceHealth = 0;

    private string mPlayerSavePath;

    void Awake()
    {
        if (mController == null) {
            DontDestroyOnLoad (gameObject);
            mController = this;
        } else if (mController != this) {
            Destroy(gameObject);
        }

        mPlayerData = new PlayerData ();

        mPlayerSavePath = Application.persistentDataPath + "/playerInfo.dat";
        Debug.Log ("mPlayerSavePath = " + mPlayerSavePath);
    }

    void OnGUI()
    {
        GUI.Label (new Rect (10, 10, 200, 30), "Preference Health: " + mPreferenceHealth);
        GUI.Label (new Rect (10, 50, 200, 30), "mPlayerData.mHealth: " + mPlayerData.mHealth);

        if (GUI.Button (new Rect (10, 90, 120, 30), "Increase Health")) {
            mPreferenceHealth += 10;
            mPlayerData.mHealth += 10;
        }

        if (GUI.Button (new Rect (10, 130, 120, 30), "Decrease Health")) {
            mPreferenceHealth -= 10;
            mPlayerData.mHealth -= 10;
        }

        if (GUI.Button (new Rect (10, 170, 120, 30), "Save File")) {
            Save ();
        }

        if (GUI.Button (new Rect (10, 210, 120, 30), "Load File")) {
            Load ();
        }

        if (GUI.Button (new Rect (10, 250, 120, 30), "Save Preference")) {
            SavePreference ();
        }

        if (GUI.Button (new Rect (10, 290, 120, 30), "Load Preference")) {
            LoadPreference ();
        }
    }

    public void Save()
    {
        Debug.Log ("Application.persistentDataPath = " + Application.persistentDataPath);
        if (!File.Exists (mPlayerSavePath)) {
            FileStream fsc = File.Create(mPlayerSavePath);
            fsc.Close();
        }
        BinaryFormatter bf = new BinaryFormatter ();
        FileStream fs = File.Open (mPlayerSavePath, FileMode.Open);

        bf.Serialize (fs, mPlayerData);
        fs.Close ();
    }

    public void Load()
    {
        if (File.Exists(mPlayerSavePath))
        {
            BinaryFormatter bf = new BinaryFormatter();
            FileStream fs = File.Open(mPlayerSavePath, FileMode.Open);
            mPlayerData = (PlayerData)bf.Deserialize(fs);
            fs.Close();
            Debug.Log("Load: mPlayerData.mHealth = " + mPlayerData.mHealth);
            Debug.Log("Load: mPlayerData.mB.BuildingType = " + mPlayerData.mB.mBT);
        }
    }

    public void SavePreference()
    {
        // Player preference
        PlayerPrefs.SetFloat ("Health", mPreferenceHealth);
    }

    public void LoadPreference()
    {
        Debug.Log("Pre Health = " + PlayerPrefs.GetFloat("Health"));
        mPreferenceHealth = PlayerPrefs.GetFloat ("Health");
    }
}

[Serializable]
public class PlayerData
{
    public float mHealth = 100;

    public Building mB = new Building();
}

[Serializable]
public enum BuildingType
{
    E_WALL = 0
}

[Serializable]
public class Building
{
    public BuildingType mBT = BuildingType.E_WALL;
}

Screen Shot

Note:
Cross-platform, except for the Web player.

Unity Editor Window, Menu Item & ScriptableObject

using System;
using UnityEngine;
using UnityEditor;

[Serializable]
public class MyScriptableObject : ScriptableObject
{
    public string mID = "1";
}

// Use an Editor Window to edit ScriptableObject data
public class MyEditorWindow : EditorWindow
{
    private static string mValue = null;

    [MenuItem("Window/MyEditorWindow")]
    static void Init()
    {
        MyEditorWindow window = (MyEditorWindow)EditorWindow.GetWindow (typeof(MyEditorWindow));
        window.Show ();
    }

    void OnGUI()
    {
        MyScriptableObject asset = AssetDatabase.LoadAssetAtPath("Assets/MyScriptableObject.asset", typeof(MyScriptableObject)) as MyScriptableObject;
        if (asset != null) {
            mValue = asset.mID;
        }
        Debug.Log ("mValue = " + mValue);

        Debug.Log("OnGUI");
        GUILayout.Label ("mID", EditorStyles.boldLabel);
        string value2 = EditorGUILayout.TextField ("ID", mValue);
        if (GUILayout.Button ("Save", EditorStyles.miniButton)) {
            Debug.Log("Save Button Clicked");
            MyScriptableObject asset2 = ScriptableObject.CreateInstance<MyScriptableObject> ();
            asset2.mID = value2;
            AssetDatabase.CreateAsset (asset2, "Assets/MyScriptableObject.asset");
            AssetDatabase.SaveAssets ();
        }
    }
}

// Menu Item study
public class MyMenuItems
{
    // Normal menu item (%q adds a Ctrl/Cmd+Q shortcut)
    [MenuItem("Tools/CreateScriptableAssets %q")]
    private static void Save()
    {
        MyScriptableObject asset = ScriptableObject.CreateInstance<MyScriptableObject> ();
        asset.mID = "2";
        AssetDatabase.CreateAsset (asset, "Assets/MyScriptableObject.asset");
        AssetDatabase.SaveAssets ();
    }

    [MenuItem("Tools/LoadScriptableAssets %w")]
    private static void Load()
    {

    }

    // Context menu item in the Project (Assets) view
    [MenuItem("Assets/ContextMenuItem")]
    private static void ContextMenuItem()
    {
        Debug.Log("ContextMenuItem()");
    }

    // Context menu item on the Transform component
    [MenuItem("CONTEXT/Transform/ContextMenuItem")]
    private static void ContextMenuItem2()
    {
        Debug.Log ("ContextMenuItem2");
    }

    [MenuItem("Assets/Create/ContextMenuItem")]
    private static void ContextMenuItem3()
    {
        Debug.Log ("ContextMenuItem3");
    }
}

With custom Menu Items, we can quickly create the assets we need.

With a custom Editor Window, we can build an editing panel for specific data.

ScriptableObject is mainly for storing lightweight data; the main difference from MonoBehaviour is that it does not attach to a GameObject and must be created via CreateInstance.

Combining a custom Editor Window with ScriptableObject storage, we can edit custom data in our own panel and then use it.

Screen Shot

Coroutine

Reference Website

Why are we using coroutines?

  1. Making things happen step by step
  2. Writing routines that need to happen over time
  3. Writing routines that have to wait for another operation to complete

What is a coroutine?

Coroutines are not threads and coroutines are not asynchronous.

A coroutine is a function that is executed partially and, presuming suitable conditions are met, will be resumed at some point in the future until its work is done.
The start of a coroutine corresponds to the creation of an object of type coroutine. That object is tied to the MonoBehaviour component that hosted the call.

How long is the life cycle? & When does coroutine get called?

The lifetime of the Coroutine object is bound to the lifetime of the MonoBehaviour object, so if the latter gets destroyed mid-process, the coroutine object is destroyed with it. Whenever the game object a coroutine is bound to is destroyed or made inactive (e.g. gameObject.SetActive(false), Destroy(gameObject)), the coroutine stops being called. A coroutine runs until it reaches a yield.

GameController.cs

using UnityEngine;
using System.Collections;

using System;
using System.Runtime.Serialization.Formatters.Binary;
using System.IO;

using UnityEngine.UI;

// For Menu Item
using UnityEditor;

public class GameController : MonoBehaviour {

    public static GameController mController;

    public GameObject mCoroutineObject;

    private float mInputTimer = 0.0f;

    public float mValidInputDeltaTime = 0.5f;

    void Awake()
    {
        if (mController == null) {
            DontDestroyOnLoad (gameObject);
            mController = this;
        } else if (mController != this) {
            Destroy(gameObject);
        }
    }

    void Update()
    {
        mInputTimer += Time.deltaTime;

        if (mInputTimer > mValidInputDeltaTime) {
            if (Input.GetKey (KeyCode.C)) {
                mInputTimer = 0.0f;
                Debug.Log ("Activate Coroutine Game Object");
                mCoroutineObject.SetActive (true);
            }
        }

        if (mInputTimer > mValidInputDeltaTime) {
            if (Input.GetKey (KeyCode.U)) {
                mInputTimer = 0.0f;
                Debug.Log ("Deactivate Coroutine Game Object");
                mCoroutineObject.SetActive (false);
            }
        }

        if (mInputTimer > mValidInputDeltaTime) {
            if (Input.GetKey (KeyCode.E)) {
                mInputTimer = 0.0f;
                Debug.Log ("Enable Coroutine MonoBehaviour");
                mCoroutineObject.GetComponent<CoroutineStudy>().enabled = true;
            }
        }

        if (mInputTimer > mValidInputDeltaTime) {
            if (Input.GetKey (KeyCode.D)) {
                mInputTimer = 0.0f;
                Debug.Log ("Disable Coroutine MonoBehaviour");
                mCoroutineObject.GetComponent<CoroutineStudy>().enabled = false;
            }
        }

        if (mInputTimer > mValidInputDeltaTime) {
            if (Input.GetKey (KeyCode.K)) {
                mInputTimer = 0.0f;
                Debug.Log ("Destroy Coroutine Game Object");
                Destroy(mCoroutineObject);
            }
        }
    }
}

CoroutineStudy.cs

using UnityEngine;
using System.Collections;

public class CoroutineStudy : MonoBehaviour {

    private string mCoroutineText;

    private bool isFixedCall = false; // Ensure FixedUpdate(), Update() and LateUpdate() each log only once

    private bool isUpdateCall = false;

    private bool isLateUpdateCall = false;

    void Awake()
    {
        mCoroutineText = "";
    }

    void Start()
    {

    }

    void OnGUI()
    {
        GUI.Label (new Rect (10, 10, 200, 30), "Coroutine Text: " + mCoroutineText);

        if (GUI.Button (new Rect (400, 10, 120, 30), "Start Coroutine")) {
            mCoroutineText = "";
            StartCoroutine (CoroutineCall ());
        }
    }

    void FixedUpdate()
    {
        if (!isFixedCall)
        {
            Debug.Log("FixedUpdate Call Begin");
            StartCoroutine(FixedCoutine());
            Debug.Log("FixedUpdate Call End");
            isFixedCall = true;
        }
    }

    IEnumerator FixedCoutine()
    {
        Debug.Log("This is Fixed Coroutine Call Before");
        yield return null;
        Debug.Log("This is Fixed Coroutine Call After");
    }

    void Update()
    {
        if (!isUpdateCall)
        {
            Debug.Log("Update Call Begin");
            StartCoroutine(UpdateCoutine());
            Debug.Log("Update Call End");
            isUpdateCall = true;
        }
    }

    IEnumerator UpdateCoutine()
    {
        Debug.Log("This is Update Coroutine Call Before");
        yield return null;
        Debug.Log("This is Update Coroutine Call After");
    }

    void LateUpdate()
    {
        if (!isLateUpdateCall)
        {
            Debug.Log("LateUpdate Call Begin");
            StartCoroutine(LateCoutine());
            Debug.Log("LateUpdate Call End");
            isLateUpdateCall = true;
        }
    }

    IEnumerator LateCoutine()
    {
        Debug.Log("This is Late Coroutine Call Before");
        yield return null;
        Debug.Log("This is Late Coroutine Call After");
    }

    private IEnumerator CoroutineCall()
    {
        for (int i = 1; i <= 20; i++) {
            mCoroutineText = i.ToString();
            Debug.Log("Coroutine Text: " + mCoroutineText);
            yield return new WaitForSeconds(1.0f);
        }

        mCoroutineText = "Finished";
    }
}

ScreenShots
The relationship between Coroutine lifetime and MonoBehaviour & GameObject

The relationship between Coroutine lifetime and MonoBehaviour

Note:
Disabling the MonoBehaviour (e.g. monoBehaviour.enabled = false) does not stop its coroutines.

A coroutine that does yield return null resumes after LateUpdate() (judging only from the test results above).
By yield returning WaitForSeconds(), WaitForEndOfFrame(), WaitForFixedUpdate(), or any other supported yield type, a coroutine can be scheduled to resume at a specific point.

StartCoroutine() calls can be nested, which lets code wait for a particular sub-operation to finish before continuing.

How to stop coroutines?

IEnumerator coroutine = SomeCoroutine(); // SomeCoroutine is any IEnumerator-returning method (illustrative name)
StartCoroutine(coroutine);
StopCoroutine(coroutine);

Note:
If the GameObject is inactive or has been destroyed, its coroutines will no longer run; setting MonoBehaviour.enabled = false instead disables the component while still letting its coroutines fire.

Going Deeper into Coroutines

Reference: Coroutine,你究竟干了什么?

Before digging deeper into coroutines, what exactly are IEnumerator and yield?
IEnumerator
Supports a simple iteration over a non-generic collection.
In C#, IEnumerator mainly lets user-defined types support iterator-style access.

yield
yield is a C# 2.0 feature that marks the containing method as an iterator. Each iteration step calls MoveNext(), which executes the method body; when execution reaches a yield, the method returns, the yield position is recorded, and the next MoveNext() resumes from there.

yield break ends the iteration.

Why bring up IEnumerator and yield?
Because a Unity coroutine is just an IEnumerator-based iterator.

When we call StartCoroutine, the iterator is added to a coroutine list,
and a coroutine manager steps through the list every frame, checking whether each coroutine has met its resume condition or is finished and can be removed.
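The pause-and-resume mechanics are easy to see in isolation with a Python generator, whose next() plays the role of MoveNext():

```python
def countdown(n):
    """Each next() runs the body up to the following yield, then pauses."""
    while n > 0:
        yield n        # execution stops here; the position is remembered
        n -= 1
    # running off the end is the equivalent of C#'s `yield break`

it = countdown(3)
print(next(it))   # 3 -- runs to the first yield
print(next(it))   # 2 -- resumes right after the yield
print(list(it))   # [1] -- drains the remaining values
```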

CoroutineManager.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class CoroutineYieldInstruction {
    public virtual bool IsDone()
    {
        return true;
    }
}

public class CoroutineWaitForSeconds : CoroutineYieldInstruction {
    float m_WaitTime;
    float m_StartTime;

    public CoroutineWaitForSeconds(float waittime)
    {
        m_WaitTime = waittime;
        m_StartTime = -1;
    }

    public override bool IsDone()
    {
        if (m_StartTime < 0) {
            m_StartTime = Time.time;
        }

        // check elapsed time
        return (Time.time - m_StartTime) >= m_WaitTime;
    }
}

public class CoroutineManager : MonoBehaviour {

    public static CoroutineManager Instance {
        get;
        private set;
    }

    List<System.Collections.IEnumerator> m_Enumerators = new List<System.Collections.IEnumerator>();

    List<System.Collections.IEnumerator> m_EnumeratorsBuffer = new List<System.Collections.IEnumerator>();

    void Awake()
    {
        if (Instance == null) {
            Instance = this;
        } else {
            Debug.Log ("Multi-instances of CoroutineManager");
        }
    }

    void LateUpdate()
    {
        for (int i = 0; i < m_Enumerators.Count; i++) {
            // handle special enumerators (yield instructions)
            if (m_Enumerators[i].Current is CoroutineYieldInstruction)
            {
                CoroutineYieldInstruction yieldInstruction = m_Enumerators[i].Current as CoroutineYieldInstruction;
                if (!yieldInstruction.IsDone()) {
                    continue;
                }
            }

            // do the normal MoveNext
            if (!m_Enumerators[i].MoveNext()) {
                m_EnumeratorsBuffer.Add(m_Enumerators[i]);
                continue;
            }
        }

        // remove finished enumerators
        for (int i = 0; i < m_EnumeratorsBuffer.Count; i++)
        {
            m_Enumerators.Remove(m_EnumeratorsBuffer[i]);
        }

        m_EnumeratorsBuffer.Clear ();
    }

    public void StartCoroutineSimple(System.Collections.IEnumerator enumerator)
    {
        m_Enumerators.Add (enumerator);
    }
}

Screenshots
My Own Coroutine

Unity Optimization Notes

Code

Use for or while instead of foreach

foreach compiled through Mono causes an extra memory allocation (compiling with VS reportedly does not -- untested).
Reference

ForEachAndFor.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class ForEachAndFor : MonoBehaviour {

    private List<int> mTestList;

    // Use this for initialization
    void Start () {
        // Note: new List<int>(1000) only reserves capacity -- Count stays 0,
        // so this loop body never actually runs. The allocation test below
        // still works; it just iterates an empty list.
        mTestList = new List<int>(1000);
        for(int i = 0; i < mTestList.Count; i++)
        {
            mTestList[i] = i;
        }
    }

    // Update is called once per frame
    void Update () {

        foreach(var it in mTestList)
        {

        }
        /*
        for(int i = 0; i < mTestList.Count; i++)
        {

        }
        */
    }
}

Screenshots:
ForeachCall
ForCall

From the test results above we can see that foreach really does allocate an extra 40 B.

Let's look at the decompiled foreach code (I used ILSpy):

using System;
using System.Collections.Generic;
using UnityEngine;

public class ForEachAndFor : MonoBehaviour
{
    private List<int> mTestList;

    private void Start()
    {
        this.mTestList = new List<int>(1000);
        for (int i = 0; i < this.mTestList.get_Count(); i++)
        {
            this.mTestList.set_Item(i, i);
        }
    }

    private void Update()
    {
        using (List<int>.Enumerator enumerator = this.mTestList.GetEnumerator())
        {
            while (enumerator.MoveNext())
            {
                int current = enumerator.get_Current();
            }
        }
    }
}

The decompiled C# above doesn't reveal where the allocation comes from,
so let's look at the generated IL with an IL disassembler (ildasm, which ships with Visual Studio).

.method private hidebysig instance void  Update() cil managed
{
// code size 55 (0x37)
.maxstack 8
.locals init (int32 V_0,
valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32> V_1)
IL_0000: ldarg.0
IL_0001: ldfld class [mscorlib]System.Collections.Generic.List`1<int32> ForEachAndFor::mTestList
IL_0006: callvirt instance valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<!0> class [mscorlib]System.Collections.Generic.List`1<int32>::GetEnumerator()
IL_000b: stloc.1
.try
{
IL_000c: br IL_0019
IL_0011: ldloca.s V_1
IL_0013: call instance !0 valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::get_Current()
IL_0018: stloc.0
IL_0019: ldloca.s V_1
IL_001b: call instance bool valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::MoveNext()
IL_0020: brtrue IL_0011
IL_0025: leave IL_0036
} // end .try
finally
{
IL_002a: ldloc.1
IL_002b: box valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>
IL_0030: callvirt instance void [mscorlib]System.IDisposable::Dispose()
IL_0035: endfinally
} // end handler
IL_0036: ret
} // end of method ForEachAndFor::Update

From the IL we can see that the Mono-compiled code boxes the value-type Enumerator in the finally block (I'm not entirely sure which source construct every IL instruction maps to, but it looks like the boxing comes from how the using statement was compiled -- Dispose() is invoked through the IDisposable interface, which forces the struct enumerator to be boxed).

For Box and Unbox, see the reference.
The official docs show that boxing a value type stores a temporary copy on the heap, which is what causes the extra memory overhead.

Rendering

Reduce the number and level of detail of rendered objects

Physics

Disable unnecessary collisions

(Edit -> Project Settings -> Physics)
Enable collision detection only between the layers that actually need it.

Memory

Avoid unnecessary GC

Reuse allocated memory wherever possible instead of reallocating every time.

Game-Related

Shuffle Bag

Unity and C# provide a Random class for pseudo-random numbers, but pseudo-random numbers are not truly random and give no guarantee about how often each outcome occurs, which can make a game noticeably less fun.

References:
Shuffle Bags: Making Random() Feel More Random
Never-ending Shuffled Sequences - When Random is too Random
Implementation reference:
Shuffle bag algorithm implemented in C#
So what is a Shuffle Bag?

A Shuffle Bag is a technique for controlling randomness to create the distribution we desire. The idea is:
Pick a range of values with the desired distribution.
Put all these values into a bag.
Shuffle the bag’s contents.
Pull the values out one by one until you reach the end.
Once you reach the end, you start over, pulling the values out one by one again.

As described above, a Shuffle Bag puts every possible outcome into a list and picks one at random; picked values are excluded from subsequent draws until every value in the list has been used, at which point the bag starts over.
The data placed in the list therefore fixes the probability of each outcome,
and a single fill lets us keep drawing randomly forever.
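The mechanics above can be sketched in a few lines of C++ (an illustrative sketch, separate from the C# implementation below; the class and method names are made up): draw a random element from the not-yet-drawn portion, swap it behind a cursor, and refill once the bag is exhausted.

```cpp
#include <cstddef>
#include <cstdlib>
#include <utility>
#include <vector>

// Minimal shuffle bag: next() draws a uniformly random element from the
// not-yet-drawn portion [0, cursor], swaps it to the back of that portion,
// and shrinks the portion. When the portion is exhausted the bag refills
// itself, so every values.size() draws reproduce exactly the distribution
// the bag was filled with.
class ShuffleBagSketch
{
public:
    void add(int value)
    {
        values.push_back(value);
        cursor = values.size() - 1;
    }

    int next()
    {
        if (values.empty())
        {
            return 0; // empty bag: nothing to draw (sentinel value)
        }
        if (cursor == 0) // last remaining element: return it and refill
        {
            cursor = values.size() - 1;
            return values[0];
        }
        std::size_t grab = std::rand() % (cursor + 1);
        std::swap(values[grab], values[cursor]);
        int picked = values[cursor];
        cursor--;
        return picked;
    }

private:
    std::vector<int> values;
    std::size_t cursor = 0;
};
```

Each group of three draws from a bag filled with {1, 2, 3} yields each value exactly once, just in a shuffled order.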

Implementation:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class ShuffleBag<T> : ICollection<T>, IList<T>
{
    private List<T> mData = new List<T>();

    private int mCursor = 0;

    private T last;

    public T Next()
    {
        if (mData.Count == 0)
        {
            return default(T);
        }

        if (mCursor < 1)
        {
            mCursor = mData.Count - 1;
            if (mData.Count < 1)
            {
                return default(T);
            }
            return mData[0];
        }

        int grab = Mathf.FloorToInt(Random.value * (mCursor + 1));
        T temp = mData[grab];
        mData[grab] = mData[mCursor];
        mData[mCursor] = temp;
        mCursor--;
        return temp;
    }

    // IList<T> implementation
    public int IndexOf(T item)
    {
        return mData.IndexOf(item);
    }

    public void Insert(int index, T item)
    {
        mData.Insert(index, item);
        mCursor = mData.Count - 1;
    }

    public void RemoveAt(int index)
    {
        mData.RemoveAt(index);
        mCursor = mData.Count - 1;
    }

    public T this[int index]
    {
        get
        {
            return mData[index];
        }
        set
        {
            mData[index] = value;
        }
    }

    // IEnumerable<T> implementation
    IEnumerator<T> IEnumerable<T>.GetEnumerator()
    {
        return mData.GetEnumerator();
    }

    // ICollection<T> implementation
    public void Add(T item)
    {
        mData.Add(item);
        mCursor = mData.Count - 1;
    }

    public int Count
    {
        get
        {
            return mData.Count;
        }
    }

    public void Clear()
    {
        //mCursor = 0;
        mData.Clear();
    }

    public bool Contains(T item)
    {
        return mData.Contains(item);
    }

    public void CopyTo(T[] array, int arrayindex)
    {
        foreach (T item in mData)
        {
            array.SetValue(item, arrayindex);
            arrayindex++;
        }
    }

    public bool Remove(T item)
    {
        bool removesuccess = mData.Remove(item);
        mCursor = mData.Count - 1;
        return removesuccess;
    }

    public bool IsReadOnly
    {
        get
        {
            return false;
        }
    }

    // IEnumerable implementation
    IEnumerator IEnumerable.GetEnumerator()
    {
        return mData.GetEnumerator();
    }
}

C# Study

Book download link: c#入门经典第五版

Sorting Algorithm Concepts

  1. Time complexity -- the amount of computational work an algorithm requires

  2. Average time complexity -- the time complexity in the typical case

  3. Worst-case time complexity -- for the special input that costs the most time

  4. Best-case time complexity -- for the special input that costs the least time

  5. Space complexity -- the amount of memory an algorithm requires
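These definitions become concrete with a small example (hypothetical, not from the original text): a linear search hits its best, worst and average cases depending purely on where the target sits.

```cpp
#include <cstddef>

// Linear search: returns the index of target in arr[0..n), or -1.
// Best case  O(1): target is at index 0.
// Worst case O(n): target is absent or at the last index.
// Average    O(n): about n/2 comparisons for a uniformly placed target.
// Space complexity O(1): only a constant number of variables are used.
int linearSearch(const int* arr, std::size_t n, int target)
{
    for (std::size_t i = 0; i < n; i++)
    {
        if (arr[i] == target)
        {
            return static_cast<int>(i);
        }
    }
    return -1;
}
```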

Sorting Algorithms


Bubble Sort

Basic idea:

  1. Compare adjacent elements; if the first is larger than the second, swap them.
  2. Do the same for every pair of adjacent elements, from the first pair to the last. After this pass, the last element is the largest.
  3. Repeat the steps above for all elements except the last one.
  4. Keep repeating over fewer and fewer elements until no pair needs to be compared.

Code

#include <utility> // std::swap

// Bubble sort: both loops depend on the data size,
// so the average time complexity is O(n^2).
// The length is passed explicitly -- it cannot be recovered from a raw pointer.
void bubbleSort(int* sortarray, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        // after pass i, the largest i elements have bubbled to the end
        for (int j = 0; j < n - 1 - i; j++)
        {
            if (sortarray[j] > sortarray[j + 1])
            {
                std::swap(sortarray[j], sortarray[j + 1]);
            }
        }
    }
}

Bubble sort time complexity
Average: O(n^2)
Worst case: O(n^2) (every comparison needs a swap)
Best case: O(n) (the first pass finishes the sort and the rest can be skipped -- this requires a termination check)
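The O(n) best case depends on that termination check: if a full pass swaps nothing, the array is already sorted and we can stop. A sketch with the early-exit flag added (the length is passed explicitly, since it cannot be derived from a raw pointer):

```cpp
#include <utility> // std::swap

// Bubble sort with an early-exit flag: if one full pass makes no swap,
// the array is already sorted and we stop. On sorted input this does
// a single pass, giving the O(n) best case.
void bubbleSortEarlyExit(int* sortarray, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;
        // after pass i, the last i elements are already in place
        for (int j = 0; j < n - 1 - i; j++)
        {
            if (sortarray[j] > sortarray[j + 1])
            {
                std::swap(sortarray[j], sortarray[j + 1]);
                swapped = true;
            }
        }
        if (!swapped)
        {
            break; // no swap in this pass: already sorted
        }
    }
}
```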

Heap Sort

Related concepts

  1. Complete binary tree
    Every level except the last is completely filled, and the last level is only missing nodes on the right.

  2. k(i) <= k(2i) and k(i) <= k(2i+1) (1 <= i <= n/2) defines a min-heap; for a max-heap replace <= with >=. // k(i) corresponds to a non-leaf node of the binary tree, k(2i) to its left child, and k(2i+1) to its right child

Basic idea
Using the max-heap (or min-heap) property we can always read off the largest (or smallest) element, so by building and re-adjusting the heap we obtain the values in sorted order.

  1. Max-heap adjust (Max_Heapify): fix up a subtree so that children are always smaller than their parent
  2. Build max heap (Build_Max_Heap): reorder all of the data into a heap
  3. Heap sort (HeapSort): remove the root (the first element) and recursively re-run the max-heap adjustment

Code

#include <utility> // std::swap

// Adjust the heap so the subtree rooted at parentindex is a max heap.
// This costs O(log(n)) -- proportional to the heap's depth.
void heapAdjust(int* sortarray, int parentindex, int length)
{
    int max_index = parentindex;
    int left_child_index = parentindex * 2 + 1;
    int right_child_index = parentindex * 2 + 2;

    // choose the biggest among parent, left child and right child
    if (left_child_index < length && sortarray[left_child_index] > sortarray[max_index])
    {
        max_index = left_child_index;
    }

    if (right_child_index < length && sortarray[right_child_index] > sortarray[max_index])
    {
        max_index = right_child_index;
    }

    // if either child is bigger than the parent, swap them and adjust
    // the affected child again to maintain the max-heap property
    if (max_index != parentindex)
    {
        std::swap(sortarray[max_index], sortarray[parentindex]);
        heapAdjust(sortarray, max_index, length);
    }
}

// Build a max heap from the initial data (length passed explicitly).
void buildingHeap(int* sortarray, int length)
{
    for (int i = length / 2 - 1; i >= 0; i--)
    {
        // 1.2 Adjust heap: make each subtree meet the max-heap definition
        // Max heap definition:
        //   (k(i) >= k(2i) && k(i) >= k(2i+1)) (1 <= i <= n/2)
        heapAdjust(sortarray, i, length);
    }
}

void heapSort(int* sortarray, int length)
{
    // Steps:
    //   1. Build heap
    //     1.1 Init heap
    //     1.2 Adjust heap
    //   2. Sort heap

    // 1. Build max heap
    buildingHeap(sortarray, length);

    // 2. Sort heap -- this loop is O(n), proportional to the data count
    for (int i = length - 1; i > 0; i--)
    {
        // swap the first (largest) element with the last element, then
        // re-adjust the shrunken heap so it is a max heap again
        std::swap(sortarray[i], sortarray[0]);
        // since the max heap was already built, adjusting from index 0
        // after the swap is enough
        heapAdjust(sortarray, 0, i);
    }
}

Heap sort time complexity
Average: O(n * log(n))
Worst case: O(n * log(n))
Best case: O(n * log(n)) (the cost always depends on the heap's depth and the data length; the adjust and sort phases cannot be avoided, so the best and worst cases are both O(n * log(n)))
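For comparison (not part of the original text), the C++ standard library exposes the same two phases directly: std::make_heap corresponds to building the max heap and std::sort_heap to the repeated swap-and-adjust loop.

```cpp
#include <algorithm>
#include <vector>

// Heap sort via the standard library: make_heap builds a max heap in O(n),
// sort_heap repeatedly swaps the root to the back and re-heapifies,
// for O(n * log(n)) overall -- the same two phases as heapSort above.
std::vector<int> stlHeapSort(std::vector<int> data)
{
    std::make_heap(data.begin(), data.end()); // phase 1: build max heap
    std::sort_heap(data.begin(), data.end()); // phase 2: pop max to the back
    return data; // ascending order
}
```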

Quick Sort

Basic idea

  1. Pick a value as the pivot, then move everything smaller than the pivot to its left and everything larger to its right, which places one element (the pivot) in its final position.
  2. Divide and conquer: keep splitting the data into smaller groups and partitioning each around a pivot until every group has a single element.

Code

#include <utility> // std::swap

int partition(int* sortarray, int l, int r)
{
    // use the last element of the range as the pivot
    int pivot = sortarray[r];
    int i = l;
    for (int j = l; j < r; j++)
    {
        if (sortarray[j] <= pivot)
        {
            std::swap(sortarray[i], sortarray[j]);
            i++;
        }
    }
    std::swap(sortarray[i], sortarray[r]);
    return i;
}

void quicksort(int* sortarray, int low, int high, bool benableoptimize)
{
    // base case: ranges of zero or one element are already sorted
    if (low >= high)
    {
        return;
    }

    // pivot chosen to optimize quicksort: median-of-three
    if (benableoptimize)
    {
        int middlepos = low + (high - low) / 2;
        if (sortarray[low] > sortarray[middlepos])
        {
            std::swap(sortarray[low], sortarray[middlepos]);
        }

        if (sortarray[middlepos] > sortarray[high])
        {
            std::swap(sortarray[middlepos], sortarray[high]);
        }

        if (sortarray[low] > sortarray[middlepos])
        {
            std::swap(sortarray[low], sortarray[middlepos]);
        }

        // move the median to the end so partition() uses it as the pivot
        std::swap(sortarray[middlepos], sortarray[high]);
    }

    int pivotpos = partition(sortarray, low, high);
    quicksort(sortarray, low, pivotpos - 1, benableoptimize);
    quicksort(sortarray, pivotpos + 1, high, benableoptimize);
}

Time complexity
Average: O(n * log(n)) (each partition splits the range roughly in half)
Worst case: O(n^2) (each partition splits off only a single element, which is why quicksort with a naive pivot performs poorly on nearly sorted data)

Best case:
Quicksort's efficiency can be improved by choosing a better pivot.
The pivot determines how the range is split and therefore how efficient the recursion is, so the best case depends on pivot selection (the median-of-three approach above orders the first, middle and last elements and takes the middle one as the pivot, so each split is as close to half-and-half as possible).
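Besides median-of-three, another common way to defuse the worst case (my addition, not from the original text) is to pick the pivot uniformly at random, so that no fixed input ordering can reliably trigger the O(n^2) behavior. A sketch:

```cpp
#include <cstdlib>
#include <utility>

// Quicksort with a randomized pivot: a uniformly random element is swapped
// into the last position and used as the pivot, so no fixed input order can
// dependably produce quadratic behavior.
static int randomizedPartition(int* a, int low, int high)
{
    int r = low + std::rand() % (high - low + 1);
    std::swap(a[r], a[high]); // random pivot goes to the end
    int pivot = a[high];
    int i = low;
    for (int j = low; j < high; j++)
    {
        if (a[j] <= pivot)
        {
            std::swap(a[i], a[j]);
            i++;
        }
    }
    std::swap(a[i], a[high]);
    return i;
}

void randomizedQuicksort(int* a, int low, int high)
{
    if (low >= high)
    {
        return; // base case: 0 or 1 elements
    }
    int p = randomizedPartition(a, low, high);
    randomizedQuicksort(a, low, p - 1);
    randomizedQuicksort(a, p + 1, high);
}
```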

Merge Sort

Basic idea

  1. Divide and conquer: split the data into many small groups, sort them, then merge.
  2. Merge pairs of sorted groups by comparing their front elements, then merge the results with other sorted groups, until all of the data has been sorted and merged together.

Code

#include <vector>
using std::vector;

void merge(vector<int>& sortarray, int start, int middle, int end)
{
    int left_size = middle - start + 1;
    int right_size = end - middle;
    vector<int> left;
    vector<int> right;
    for (int i = 0; i < left_size; i++)
    {
        left.push_back(sortarray[start + i]);
    }

    for (int j = 0; j < right_size; j++)
    {
        right.push_back(sortarray[j + middle + 1]);
    }

    int k = 0;
    int l = 0;
    int index = start;
    // the worst case makes about n * log(n) - n + 1 comparisons,
    // the best case about n * log(n) / 2 comparisons
    while (k < left_size && l < right_size)
    {
        if (left[k] < right[l])
        {
            sortarray[index] = left[k];
            k++;
        }
        else
        {
            sortarray[index] = right[l];
            l++;
        }
        index++;
    }

    while (k < left_size)
    {
        sortarray[index] = left[k];
        k++;
        index++;
    }

    while (l < right_size)
    {
        sortarray[index] = right[l];
        l++;
        index++;
    }
}

void mergesort(vector<int>& sortarray, int start, int end)
{
    // sort first (subdivide the array),
    // then merge (recursively merge back together)
    if (start < end)
    {
        int middle = (end + start) / 2;
        mergesort(sortarray, start, middle);
        mergesort(sortarray, middle + 1, end);
        merge(sortarray, start, middle, end);
    }
}

Time complexity
Average: O(n * log(n))
Best case: O(n) (when the data is already sorted, only merging the subdivided runs remains; this applies to adaptive/natural-merge variants -- the plain top-down version above still does O(n * log(n)) work)
Worst case: O(n * log(n))

Note
Because merge sort has to store the subdivided groups, it uses comparatively more memory.
The number of merge comparisons lies between n * log(n) / 2 and n * log(n) - n + 1.
The number of merge assignments is 2n * log(n).
Average and worst-case complexity are both n * log(n).
Compared with quicksort, merge sort is steadier (better in the worst case) and better suited to partially ordered data.
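The merge step is also available in the standard library as std::inplace_merge, which lets a top-down merge sort be sketched very compactly (a sketch for comparison; std::inplace_merge typically still allocates a temporary buffer internally, consistent with the memory note above).

```cpp
#include <algorithm>
#include <vector>

// Top-down merge sort over the half-open range [start, end), using
// std::inplace_merge for the merge step. inplace_merge merges two adjacent
// sorted ranges; it usually uses an internal temporary buffer, reflecting
// merge sort's extra-memory cost.
void mergeSortInplace(std::vector<int>& data, int start, int end)
{
    if (end - start <= 1)
    {
        return; // 0 or 1 elements: already sorted
    }
    int middle = start + (end - start) / 2;
    mergeSortInplace(data, start, middle);
    mergeSortInplace(data, middle, end);
    std::inplace_merge(data.begin() + start, data.begin() + middle, data.begin() + end);
}
```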

Insertion Sort

Basic idea
Build up a sorted sequence: for each unsorted element, scan the sorted part from back to front to find its position and insert it.

  1. Start with the first element, which can be considered sorted.
  2. Take the next element and scan the already-sorted sequence from back to front.
  3. If the scanned (sorted) element is greater than the new element, shift it one position to the right.
  4. Repeat step 3 until you find a sorted element less than or equal to the new element.
  5. Insert the new element after that position.
  6. Repeat steps 2-5.

Code

// Insertion sort (length passed explicitly).
void insertSort(int* sortarray, int n)
{
    int i, j;
    int temp;
    // take elements from the second one onward and insert each in turn: O(n)
    for (i = 1; i < n; i++)
    {
        temp = sortarray[i];
        // compare against the already-sorted prefix from back to front: O(n)
        for (j = i - 1; j >= 0 && sortarray[j] > temp; j--)
        {
            sortarray[j + 1] = sortarray[j];
        }
        sortarray[j + 1] = temp;
    }
}

Time complexity
Average: O(n^2)
Worst case: O(n^2)
Best case: O(n) (the data is already sorted, so each element needs only one comparison before being inserted directly)
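That O(n) behavior on sorted data is also why insertion works well for keeping a collection sorted under frequent inserts; with std::upper_bound the insertion point can be found by binary search. A sketch (illustrative, not from the original text):

```cpp
#include <algorithm>
#include <vector>

// Insert value into an already-sorted vector, keeping it sorted.
// upper_bound finds the position in O(log n) comparisons; the insert
// itself still shifts elements, so each insert is O(n) overall.
void sortedInsert(std::vector<int>& sorted, int value)
{
    auto pos = std::upper_bound(sorted.begin(), sorted.end(), value);
    sorted.insert(pos, value);
}
```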

Summary

Every sorting algorithm has its own strengths and suits different situations.
Relatively stable algorithms with low time complexity:
heap sort, quicksort, merge sort

Merge sort holds up better than quicksort in the worst case and suits partially ordered data, but it uses more memory.

Insertion sort has a high time complexity in general, but on already-sorted data it is O(n);
it suits sorted data sets into which new elements frequently need to be inserted.

Time-complexity summary of the sorting algorithms:
Source: Visualizing Algorithms
Sort-Algorithem

Reference:
8大排序算法图文讲解 (an illustrated guide to the 8 classic sorting algorithms)

Recommended site:
Visualizing Algorithms