Contents
  1. 1. Preface
  2. 2. Camera
    1. 2.1. Camera Coverage Area
    2. 2.2. Visualizing the Camera Coverage Area
    3. 2.3. Dynamically Refreshing the Camera Coverage Area
    4. 2.4. Key Takeaways
  3. 3. Blog
  4. 4. GitHub
  5. 5. Reference

Preface

Apart from touching on camera coordinate-space transforms while studying graphics rendering, I previously knew little about how a camera's coverage area is actually derived. When doing visible-area culling (for example, an SLG game only creates the objects inside the visible region), we need to compute the area the camera currently covers and then dynamically request the map objects that newly need to be shown in that region. Dynamically requesting and displaying objects in the visible region is impossible without computing the camera's visible area. This article digs into that computation, explains the underlying camera projection principles, and builds a tool that both visualizes the camera coverage area and dynamically creates the objects inside it.

Camera

Cameras come in two kinds: the perspective camera and the orthographic camera. The former has a notion of depth and perspective foreshortening (near objects appear larger, far objects smaller), while the latter ignores depth and behaves like projecting 3D straight onto a 2D plane.
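
As a quick illustration (my own snippet, not from the original tool): both modes live on the same Unity Camera component, and which parameter drives the view volume depends on the mode:

using UnityEngine;

// Minimal sketch: the key parameter per projection mode.
public class CameraModeExample : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        if (cam.orthographic)
        {
            // Orthographic: a box-shaped view volume; orthographicSize is half its vertical extent
            Debug.Log($"Orthographic size: {cam.orthographicSize}");
        }
        else
        {
            // Perspective: a frustum driven by the vertical field of view
            Debug.Log($"Vertical FOV: {cam.fieldOfView}");
        }
    }
}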

First, let's look at the projection diagram of a perspective camera:

PerspectiveCameraPreview

FOV (Field of View) is one of the most important parameters of a perspective camera; it is the opening angle of the camera's view.

The FOV in the diagram refers to DFOV, the diagonal field of view.

VFOV is the vertical field of view; it is tied to the screen height.

HFOV is the horizontal field of view; it is tied to the screen width.
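
As a side note (this helper is my own sketch, not part of the article's tool): in Unity, Camera.fieldOfView stores the vertical FOV, so HFOV and DFOV can be derived from it and the aspect ratio, assuming a symmetric frustum. Newer Unity versions also ship Camera.VerticalToHorizontalFieldOfView for the first conversion.

using UnityEngine;

// Minimal sketch: deriving HFOV/DFOV from the vertical FOV and aspect ratio.
public static class FovUtilities
{
    /// <summary>
    /// Horizontal FOV (degrees) from vertical FOV (degrees) and aspect (width / height)
    /// </summary>
    public static float GetHorizontalFov(float verticalFovDegrees, float aspect)
    {
        var halfVRad = verticalFovDegrees * 0.5f * Mathf.Deg2Rad;
        return 2f * Mathf.Atan(Mathf.Tan(halfVRad) * aspect) * Mathf.Rad2Deg;
    }

    /// <summary>
    /// Diagonal FOV (degrees): the half-diagonal is sqrt(1 + aspect^2) times the half-height
    /// </summary>
    public static float GetDiagonalFov(float verticalFovDegrees, float aspect)
    {
        var halfVRad = verticalFovDegrees * 0.5f * Mathf.Deg2Rad;
        var halfDiagonalTan = Mathf.Tan(halfVRad) * Mathf.Sqrt(1f + aspect * aspect);
        return 2f * Mathf.Atan(halfDiagonalTan) * Mathf.Rad2Deg;
    }
}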

Camera Coverage Area

The principle for computing the camera's visible area:

  • Regardless of whether the camera is orthographic or perspective, map the four screen corners into world space (screen coordinates to world coordinates) to obtain the ray data from the camera through each of the four corners
  • Intersect those four rays with the target plane; the shape formed by the four intersection points is exactly the camera's projected footprint on that plane

As long as we have the camera's four corner rays, we never need to compute DFOV, VFOV, or HFOV ourselves.

/// <summary>
/// Viewport coordinates of the four screen corners plus the screen center
/// </summary>
private static Vector3[] ViewPortPoints = new Vector3[5]
{
    new Vector3(0, 0, 0),       // Bottom left
    new Vector3(0, 1, 0),       // Top left
    new Vector3(1, 1, 0),       // Top right
    new Vector3(1, 0, 0),       // Bottom right
    new Vector3(0.5f, 0.5f, 0), // Center
};

/// <summary>
/// Gets the ray data list (near-plane point, far-plane point) for the given camera
/// </summary>
/// <param name="camera"></param>
/// <param name="rayCastDataList"></param>
/// <returns></returns>
public static bool GetCameraRayCastDataList(Camera camera, ref List<KeyValuePair<Vector3, Vector3>> rayCastDataList)
{
    rayCastDataList.Clear();
    if (camera == null)
    {
        Debug.LogError("A null camera component is not allowed, getting the camera ray data list failed!");
        return false;
    }

    var cameraNearClipPlane = camera.nearClipPlane;
    ViewPortPoints[0].z = cameraNearClipPlane;
    ViewPortPoints[1].z = cameraNearClipPlane;
    ViewPortPoints[2].z = cameraNearClipPlane;
    ViewPortPoints[3].z = cameraNearClipPlane;
    ViewPortPoints[4].z = cameraNearClipPlane;

    var isOrthographic = camera.orthographic;
    if (isOrthographic)
    {
        // Convert the viewport points into parallel rays
        for (int i = 0; i < ViewPortPoints.Length; i++)
        {
            Ray ray = camera.ViewportPointToRay(ViewPortPoints[i]);
            var rayCastToPoint = ray.origin + ray.direction * camera.farClipPlane;
            var rayCastData = new KeyValuePair<Vector3, Vector3>(ray.origin, rayCastToPoint);
            rayCastDataList.Add(rayCastData);
        }
    }
    else
    {
        // Scaling the camera-to-near-plane-corner vectors by far/near lands them on the far plane
        var ratio = camera.farClipPlane / cameraNearClipPlane;
        var cameraPosition = camera.transform.position;

        // World positions of the four viewport corners (and center) on the near plane
        var lbNearPlaneWorldPoints = camera.ViewportToWorldPoint(ViewPortPoints[0]);
        var ltNearPlaneWorldPoints = camera.ViewportToWorldPoint(ViewPortPoints[1]);
        var rtNearPlaneWorldPoints = camera.ViewportToWorldPoint(ViewPortPoints[2]);
        var rbNearPlaneWorldPoints = camera.ViewportToWorldPoint(ViewPortPoints[3]);
        var ctNearPlaneWorldPoints = camera.ViewportToWorldPoint(ViewPortPoints[4]);

        var lbNearPlaneCameraWorldPointDir = lbNearPlaneWorldPoints - cameraPosition;
        var ltNearPlaneCameraWorldPointDir = ltNearPlaneWorldPoints - cameraPosition;
        var rtNearPlaneCameraWorldPointDir = rtNearPlaneWorldPoints - cameraPosition;
        var rbNearPlaneCameraWorldPointDir = rbNearPlaneWorldPoints - cameraPosition;
        var ctNearPlaneCameraWorldPointDir = ctNearPlaneWorldPoints - cameraPosition;

        var lbFarPlaneWorldPoint = cameraPosition + lbNearPlaneCameraWorldPointDir * ratio;
        var ltFarPlaneWorldPoint = cameraPosition + ltNearPlaneCameraWorldPointDir * ratio;
        var rtFarPlaneWorldPoint = cameraPosition + rtNearPlaneCameraWorldPointDir * ratio;
        var rbFarPlaneWorldPoint = cameraPosition + rbNearPlaneCameraWorldPointDir * ratio;
        var ctFarPlaneWorldPoint = cameraPosition + ctNearPlaneCameraWorldPointDir * ratio;

        var lbRayCastData = new KeyValuePair<Vector3, Vector3>(lbNearPlaneWorldPoints, lbFarPlaneWorldPoint);
        var ltRayCastData = new KeyValuePair<Vector3, Vector3>(ltNearPlaneWorldPoints, ltFarPlaneWorldPoint);
        var rtRayCastData = new KeyValuePair<Vector3, Vector3>(rtNearPlaneWorldPoints, rtFarPlaneWorldPoint);
        var rbRayCastData = new KeyValuePair<Vector3, Vector3>(rbNearPlaneWorldPoints, rbFarPlaneWorldPoint);
        var ctRayCastData = new KeyValuePair<Vector3, Vector3>(ctNearPlaneWorldPoints, ctFarPlaneWorldPoint);
        rayCastDataList.Add(lbRayCastData);
        rayCastDataList.Add(ltRayCastData);
        rayCastDataList.Add(rtRayCastData);
        rayCastDataList.Add(rbRayCastData);
        rayCastDataList.Add(ctRayCastData);
    }
    return true;
}
  • Once we have the rays mapped from the four screen corners, intersecting them with the target plane gives us the intersection points and, with them, the coverage area on that plane
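
For reference, here is the standard derivation that the intersection helper below implements (stated in my own notation). A ray is r(t) = o + t·d with t >= 0, and a plane through point p with normal n contains exactly the points x with n·(x - p) = 0. Substituting the ray into the plane equation:

    n·(o + t·d - p) = 0   =>   t = -n·(o - p) / (n·d)

If n·d is (almost) zero the ray is parallel to the plane and there is no unique intersection; if t < 0 the intersection lies behind the ray origin; otherwise the intersection point is o + t·d. This maps one-to-one onto the null checks in the code:
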
/// <summary>
/// Vector3Utilities.cs
/// Static Vector3 utility class
/// </summary>
public static class Vector3Utilities
{
    /// <summary>
    /// Gets the intersection point of the given ray and plane (null means no intersection)
    /// </summary>
    /// <param name="rayOrigin"></param>
    /// <param name="rayDirection"></param>
    /// <param name="planePoint"></param>
    /// <param name="planeNormal"></param>
    /// <returns></returns>
    public static Vector3? GetRayAndPlaneIntersect(Vector3 rayOrigin, Vector3 rayDirection, Vector3 planePoint, Vector3 planeNormal)
    {
        // Dot product of the plane normal and the ray direction
        float ndotu = Vector3.Dot(planeNormal, rayDirection);

        // The ray is almost parallel to the plane: either there is no
        // intersection or the ray lies within the plane
        if (Mathf.Approximately(ndotu, 0f))
        {
            return null;
        }

        // Solve for t
        Vector3 w = rayOrigin - planePoint;
        float t = -Vector3.Dot(planeNormal, w) / ndotu;

        // The intersection lies behind the ray origin
        if (t < 0)
        {
            return null;
        }

        // Compute the intersection point
        Vector3 intersectionPoint = rayOrigin + t * rayDirection;
        return intersectionPoint;
    }
}
/// <summary>
/// Gets the camera's visible-area vertex data against the plane defined by the given point and normal
/// </summary>
/// <param name="camera"></param>
/// <param name="areaPoint"></param>
/// <param name="areaNormal"></param>
/// <param name="areaPointsList"></param>
/// <param name="rectPointList"></param>
/// <returns></returns>
public static bool GetCameraVisibleArea(Camera camera, Vector3 areaPoint, Vector3 areaNormal, ref List<Vector3> areaPointsList, ref List<Vector3> rectPointList)
{
    areaPointsList.Clear();
    rectPointList.Clear();
    if (camera == null)
    {
        Debug.LogError("A null camera component is not allowed, getting the camera visible-area vertex data failed!");
        return false;
    }

    // RayCastDataList is a cached static list reused across calls
    RayCastDataList.Clear();
    GetCameraRayCastDataList(camera, ref RayCastDataList);
    foreach (var rayCastData in RayCastDataList)
    {
        var rayCastDirection = rayCastData.Value - rayCastData.Key;
        var areaData = Vector3Utilities.GetRayAndPlaneIntersect(rayCastData.Key, rayCastDirection, areaPoint, areaNormal);
        if (areaData != null)
        {
            areaPointsList.Add((Vector3)areaData);
        }
    }

    // All five rays (four corners plus center) must hit the plane,
    // otherwise there is no complete area to build a rectangle from
    if (areaPointsList.Count < 5)
    {
        areaPointsList.Clear();
        return false;
    }

    // Shrink the trapezoid into a rectangle: take the x range from the
    // top edge (indices 1 and 2) and keep each source point's y and z
    rectPointList.Add(new Vector3(areaPointsList[1].x, areaPointsList[0].y, areaPointsList[0].z));
    rectPointList.Add(new Vector3(areaPointsList[1].x, areaPointsList[1].y, areaPointsList[1].z));
    rectPointList.Add(new Vector3(areaPointsList[2].x, areaPointsList[2].y, areaPointsList[2].z));
    rectPointList.Add(new Vector3(areaPointsList[2].x, areaPointsList[3].y, areaPointsList[3].z));
    rectPointList.Add(areaPointsList[4]);
    return true;
}

As shown above, both areaPointsList and rectPointList return five vertices; the fifth one is the projection of the screen center.

The reason we also return a rectangular version of the projected area is to simplify the later point-in-trapezoid test for the perspective camera into a much cheaper point-vs-AABB test.
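
A minimal sketch of that point-vs-AABB test (my own illustration; IsPointInsideRectXZ, rectMin and rectMax are assumed names, with the min/max corners derived from the rectPointList computed above). On a horizontal ground plane only the x/z extents matter:

// Hypothetical helper: point-vs-AABB test on the XZ plane.
public static bool IsPointInsideRectXZ(Vector3 point, Vector3 rectMin, Vector3 rectMax)
{
    return point.x >= rectMin.x && point.x <= rectMax.x
        && point.z >= rectMin.z && point.z <= rectMax.z;
}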

Visualizing the Camera Coverage Area

With the projected vertex data in hand, we can build line segments from it and draw the visible area using Unity's Gizmos APIs:

/// <summary>
/// Area point
/// </summary>
[Header("Area Point")]
public Vector3 AreaPoint = Vector3.zero;

/// <summary>
/// Area normal
/// </summary>
[Header("Area Normal")]
public Vector3 AreaNormal = Vector3.up;

/// <summary>
/// Updates the camera's projected area data on the target plane
/// </summary>
public void UpdateAreaDatas()
{
    CameraUtilities.GetCameraRayCastDataList(mCameraComponent, ref mRayCastDataList);
    CameraUtilities.GetCameraVisibleArea(mCameraComponent, AreaPoint, AreaNormal, ref mAreaPointsList, ref mRectAreaPointsList);
    mAreaLinesList.Clear();
    mRectAreaLinesList.Clear();
    if (mAreaPointsList.Count > 0)
    {
        var lbToLtLine = new KeyValuePair<Vector3, Vector3>(mAreaPointsList[0], mAreaPointsList[1]);
        var ltToRtLine = new KeyValuePair<Vector3, Vector3>(mAreaPointsList[1], mAreaPointsList[2]);
        var rtToRbLine = new KeyValuePair<Vector3, Vector3>(mAreaPointsList[2], mAreaPointsList[3]);
        var rbToLbLine = new KeyValuePair<Vector3, Vector3>(mAreaPointsList[3], mAreaPointsList[0]);
        mAreaLinesList.Add(lbToLtLine);
        mAreaLinesList.Add(ltToRtLine);
        mAreaLinesList.Add(rtToRbLine);
        mAreaLinesList.Add(rbToLbLine);

        var rectLbToLtLine = new KeyValuePair<Vector3, Vector3>(mRectAreaPointsList[0], mRectAreaPointsList[1]);
        var rectLtToRtLine = new KeyValuePair<Vector3, Vector3>(mRectAreaPointsList[1], mRectAreaPointsList[2]);
        var rectRtToRbLine = new KeyValuePair<Vector3, Vector3>(mRectAreaPointsList[2], mRectAreaPointsList[3]);
        var rectRbToLbLine = new KeyValuePair<Vector3, Vector3>(mRectAreaPointsList[3], mRectAreaPointsList[0]);
        mRectAreaLinesList.Add(rectLbToLtLine);
        mRectAreaLinesList.Add(rectLtToRtLine);
        mRectAreaLinesList.Add(rectRtToRbLine);
        mRectAreaLinesList.Add(rectRbToLbLine);
    }
}

/// <summary>
/// Draws the area info Gizmos
/// </summary>
private void DrawAreaInfoGUI()
{
    var preGizmosColor = Gizmos.color;
    Gizmos.color = Color.green;
    Gizmos.DrawSphere(AreaPoint, SphereSize);
    var areaPointTo = AreaPoint + AreaNormal * 5;
    Gizmos.DrawLine(AreaPoint, areaPointTo);
    Gizmos.color = preGizmosColor;
}

/// <summary>
/// Draws the rectangular area Gizmos
/// </summary>
private void DrawRectAreaGUI()
{
    var preGizmosColor = Gizmos.color;
    Gizmos.color = new Color(0, 0.5f, 0);
    if (mRectAreaLinesList.Count > 0)
    {
        Gizmos.DrawLine(mRectAreaLinesList[0].Key, mRectAreaLinesList[0].Value);
        Gizmos.DrawLine(mRectAreaLinesList[1].Key, mRectAreaLinesList[1].Value);
        Gizmos.DrawLine(mRectAreaLinesList[2].Key, mRectAreaLinesList[2].Value);
        Gizmos.DrawLine(mRectAreaLinesList[3].Key, mRectAreaLinesList[3].Value);
    }
    Gizmos.color = preGizmosColor;
}

/// <summary>
/// Draws the area Gizmos
/// </summary>
private void DrawAreaGUI()
{
    var preGizmosColor = Gizmos.color;
    Gizmos.color = Color.green;
    if (mAreaLinesList.Count > 0)
    {
        Gizmos.DrawLine(mAreaLinesList[0].Key, mAreaLinesList[0].Value);
        Gizmos.DrawLine(mAreaLinesList[1].Key, mAreaLinesList[1].Value);
        Gizmos.DrawLine(mAreaLinesList[2].Key, mAreaLinesList[2].Value);
        Gizmos.DrawLine(mAreaLinesList[3].Key, mAreaLinesList[3].Value);
    }
    if (mAreaPointsList.Count > 0)
    {
        Gizmos.DrawSphere(mAreaPointsList[4], SphereSize);
    }
    Gizmos.color = preGizmosColor;
}
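
For completeness (this wiring is my own sketch; the original excerpt does not show it): Gizmos calls only take effect inside OnDrawGizmos or OnDrawGizmosSelected, so the host MonoBehaviour would invoke the draw methods roughly like this:

/// <summary>
/// Hypothetical wiring: Gizmos APIs only render when called from OnDrawGizmos(Selected)
/// </summary>
private void OnDrawGizmos()
{
    UpdateAreaDatas();
    DrawAreaInfoGUI();
    DrawAreaGUI();
    DrawRectAreaGUI();
}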

Camera coverage area visualization:

CameraAreaVisualization

In the screenshot, yellow is the far-plane coverage area, dark green is the near-plane coverage area, light green is the area after converting the camera's projection into a rectangle, and the red lines are the camera rays from the near plane to the far plane.

The perspective camera looks like this:

CameraRayCastVisualization

The orthographic camera looks like this:

OrthographicCameraVisualization

Dynamically Refreshing the Camera Coverage Area

The core idea of dynamic area refresh:

  1. Map the four screen corners to get the camera's four coverage rays, intersect those rays with the target plane to get the coverage area, simplify that area to a rectangle, and turn "is this object inside the coverage area" into a point-vs-AABB test (see the sketch after this list)
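
Here is a minimal sketch of such a refresh loop, under assumptions of my own (a grid-based map; WorldRectToCellRange, CreateCell and DestroyCell are hypothetical helpers, not names from the MapEditor project):

// Hypothetical per-frame refresh: recompute the visible rect, diff the
// grid cells it covers against the previous frame, then create/destroy.
private RectInt mLastCellRange;

private void LateUpdate()
{
    if (!CameraUtilities.GetCameraVisibleArea(mCameraComponent, AreaPoint, AreaNormal,
                                              ref mAreaPointsList, ref mRectAreaPointsList))
    {
        return;
    }
    var cellRange = WorldRectToCellRange(mRectAreaPointsList); // assumed helper
    if (cellRange.Equals(mLastCellRange))
    {
        return;
    }
    // Destroy cells that left the rect, create cells that entered it
    foreach (var cell in mLastCellRange.allPositionsWithin)
    {
        if (!cellRange.Contains(cell)) DestroyCell(cell);      // assumed helper
    }
    foreach (var cell in cellRange.allPositionsWithin)
    {
        if (!mLastCellRange.Contains(cell)) CreateCell(cell);  // assumed helper
    }
    mLastCellRange = cellRange;
}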

For a real-world application of dynamic coverage-area refresh, see the map editor:

MapEditor

Screenshots of the result in practice:

DynamicCreateMapData1

DynamicCreateMapData2

Note:

  1. The four screen vertices are computed in the order bottom left, top left, top right, bottom right

Key Takeaways

  1. Regardless of whether the camera is orthographic or perspective, mapping the four screen corners into world space (screen coordinates to world coordinates) yields the rays from the camera through those corners; computing the camera's coverage area then reduces to intersecting those four rays with the target plane, and the shape formed by the four intersection points is the camera's projected footprint on that plane
  2. FOV splits into DFOV (diagonal field of view), VFOV (vertical field of view, tied to screen height) and HFOV (horizontal field of view, tied to screen width)
  3. How much a perspective camera sees depends not only on FOV but also on the screen resolution (its aspect ratio)
  4. The camera's coverage shape on a given plane can be computed as ray-plane intersection tests between the camera's four rays and that plane

Blog

MapEditor

GitHub

MapEditor

Reference

Computing the camera field of view (FOV) from camera parameters

Ray-Plane intersection test

A brief analysis of camera FOV

Recommended FOV value settings
