[Translation] wikibooks/Silhouette Enhancement

[Figure: A semitransparent jellyfish. Note the increased opaqueness at the silhouettes.]

This tutorial covers the transformation of surface normal vectors. It assumes that you are familiar with alpha blending as discussed in Section “Transparency” and with shader properties as discussed in Section “Shading in World Space”.

The objective of this tutorial is to achieve an effect that is visible in the photo above: the silhouettes of semitransparent objects tend to be more opaque than the rest of the object. This adds to the impression of a three-dimensional shape even without lighting. It turns out that transformed normals are crucial to obtain this effect.

[Figure: Surface normal vectors (for short: normals) on a surface patch.]

Silhouettes of Smooth Surfaces

In the case of smooth surfaces, points on the surface at silhouettes are characterized by normal vectors that are parallel to the viewing plane and therefore orthogonal to the direction to the viewer. In the figure above, the blue normal vectors at the silhouette at the top of the figure are parallel to the viewing plane, while the other normal vectors point more in the direction to the viewer (or camera). By calculating the direction to the viewer and the normal vector and testing whether they are (almost) orthogonal to each other, we can therefore test whether a point is (almost) on the silhouette.

More specifically, if V is the normalized (i.e. of length 1) direction to the viewer and N is the normalized surface normal vector, then the two vectors are orthogonal if the dot product is 0: V·N = 0. In practice, this will rarely be the case. However, if the dot product V·N is close to 0, we can assume that the point is close to a silhouette.

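To make the test concrete, here is a minimal Cg sketch (not from the original tutorial; the function name and the threshold 0.2 are arbitrary illustrative choices):

            // Returns 1.0 when a point is "near the silhouette", i.e. when
            // |V·N| falls below a chosen threshold, and 0.0 otherwise.
            float nearSilhouette(float3 viewDirection, float3 normalDirection)
            {
               float3 V = normalize(viewDirection);   // unit direction to the viewer
               float3 N = normalize(normalDirection); // unit surface normal
               return (abs(dot(V, N)) < 0.2) ? 1.0 : 0.0;
            }
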
Increasing the Opacity at Silhouettes

For our effect, we should therefore increase the opacity α′ if the dot product V·N is close to 0. There are various ways to increase the opacity for small dot products between the direction to the viewer and the normal vector. Here is one of them (which actually has a physical model behind it, which is described in Section 5.1 of this publication) to compute the increased opacity α′ from the regular opacity α of the material:

   α′ = min(1, α / |V·N|)


It always makes sense to check the extreme cases of an equation like this. Consider the case of a point close to the silhouette: V·N ≈ 0. In this case, the regular opacity α will be divided by a small, positive number. (Note that GPUs usually handle the case of division by zero gracefully; thus, we don't have to worry about it.) Therefore, whatever α is, the ratio of α and a small positive number will be larger. The min function will take care that the resulting opacity α′ is never larger than 1.

On the other hand, for points far away from the silhouette we have V·N ≈ 1. In this case, α′ ≈ min(1, α) ≈ α; i.e., the opacity of those points will not change much. This is exactly what we want. Thus, we have just checked that the equation is at least plausible.

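As a quick numerical check (the numbers are chosen for illustration, not from the original text): with a regular opacity α = 0.5, a point near the silhouette with |V·N| = 0.1 gets α′ = min(1, 0.5 / 0.1) = min(1, 5) = 1, i.e. fully opaque, while a point facing the viewer with |V·N| = 0.9 gets α′ = min(1, 0.5 / 0.9) ≈ 0.56, close to the regular opacity.
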
Implementing an Equation in a Shader

In order to implement an equation like the one for 技术分享 in a shader, the first question should be: Should it be implemented in the vertex shader or in the fragment shader? In some cases, the answer is clear because the implementation requires texture mapping, which is often only available in the fragment shader. In many cases, however, there is no general answer. Implementations in vertex shaders tend to be faster (because there are usually fewer vertices than fragments) but of lower image quality (because normal vectors and other vertex attributes can change abruptly between vertices). Thus, if you are most concerned about performance, an implementation in a vertex shader is probably a better choice. On the other hand, if you are most concerned about image quality, an implementation in a pixel shader might be a better choice. The same trade-off exists between per-vertex lighting (i.e. Gouraud shading, which is discussed in Section “Specular Highlights”) and per-fragment lighting (i.e. Phong shading, which is discussed in Section “Smooth Specular Highlights”).

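For concreteness, here is a hedged sketch of such a per-vertex implementation (not part of the original tutorial): the opacity is computed once per vertex and interpolated by the rasterizer. The names vertexOutputPV, vertPV, and fragPV are made up; the uniforms and matrices are the Unity-provided ones used in the complete shader below.

            // Hedged sketch of a per-vertex variant; assumes the same
            // uniform float4 _Color as the complete shader below.
            struct vertexInput {
               float4 vertex : POSITION;
               float3 normal : NORMAL;
            };
            struct vertexOutputPV {
               float4 pos : SV_POSITION;
               float opacity : TEXCOORD0; // opacity computed at the vertex
            };

            vertexOutputPV vertPV(vertexInput input)
            {
               vertexOutputPV output;
               float3 normalDir = normalize(
                  mul(float4(input.normal, 0.0), _World2Object).xyz);
               float3 viewDir = normalize(_WorldSpaceCameraPos
                  - mul(_Object2World, input.vertex).xyz);
               output.opacity = min(1.0, _Color.a
                  / abs(dot(viewDir, normalDir)));
               output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
               return output;
            }

            float4 fragPV(vertexOutputPV input) : COLOR
            {
               return float4(_Color.rgb, input.opacity); // interpolated opacity
            }
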
The next question is: in which coordinate system should the equation be implemented? (See Section “Vertex Transformations” for a description of the standard coordinate systems.) Again, there is no general answer. However, an implementation in world coordinates is often a good choice in Unity because many uniform variables are specified in world coordinates. (In other environments implementations in view coordinates are very common.)

The final question before implementing an equation is: where do we get the parameters of the equation from? The regular opacity α is specified (within an RGBA color) by a shader property (see Section “Shading in World Space”). The normal vector gl_Normal is a standard vertex input parameter (see Section “Debugging of Shaders”). The direction to the viewer can be computed in the vertex shader as the vector from the vertex position in world space to the camera position in world space _WorldSpaceCameraPos, which is provided by Unity.

Thus, we only have to transform the vertex position and the normal vector into world space before implementing the equation. The transformation matrix _Object2World from object space to world space and its inverse _World2Object are provided by Unity as discussed in Section “Shading in World Space”. The application of transformation matrices to points and normal vectors is discussed in detail in Section “Applying Matrix Transformations”. The basic result is that points and directions are transformed just by multiplying them with the transformation matrix, e.g. with modelMatrix set to _Object2World:

            output.viewDir = normalize(_WorldSpaceCameraPos 
               - mul(modelMatrix, input.vertex).xyz);

On the other hand, normal vectors are transformed by multiplying them with the transposed inverse transformation matrix. Since Unity provides us with the inverse transformation matrix (which is _World2Object * unity_Scale.w apart from the bottom-right element), a better alternative is to multiply the normal vector from the left to the inverse matrix, which is equivalent to multiplying it from the right to the transposed inverse matrix, as discussed in Section “Applying Matrix Transformations”:

            output.normal = normalize(
               mul(float4(input.normal, 0.0), modelMatrixInverse).xyz);

Note that the incorrect bottom-right matrix element is no problem because it is always multiplied with 0. Moreover, the multiplication with unity_Scale.w is not necessary because we normalize the vector anyway, so the scaling doesn't matter.
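
To see why the transposed inverse is necessary, consider a concrete (illustrative) non-uniform scaling M = diag(2, 1, 1) and the plane x + y = 0 with normal N = (1, 1, 0). A point on the plane such as (1, -1, 0) is mapped by M to (2, -1, 0). Transforming the normal directly by M gives (2, 1, 0), which is not orthogonal to (2, -1, 0); transforming it by the transposed inverse diag(1/2, 1, 1) gives (1/2, 1, 0), which is.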

Now we have all the pieces that we need to write the shader.

Shader Code

Shader "Cg silhouette enhancement" {
   Properties {
      _Color ("Color", Color) = (1, 1, 1, 0.5) 
         // user-specified RGBA color including opacity
   }
   SubShader {
      Tags { "Queue" = "Transparent" } 
         // draw after all opaque geometry has been drawn
      Pass { 
         ZWrite Off // don't occlude other objects
         Blend SrcAlpha OneMinusSrcAlpha // standard alpha blending
 
         CGPROGRAM 
 
         #pragma vertex vert  
         #pragma fragment frag 
 
         #include "UnityCG.cginc"
 
         uniform float4 _Color; // define shader property for shaders
 
         struct vertexInput {
            float4 vertex : POSITION;
            float3 normal : NORMAL;
         };
         struct vertexOutput {
            float4 pos : SV_POSITION;
            float3 normal : TEXCOORD;
            float3 viewDir : TEXCOORD1;
         };
 
         vertexOutput vert(vertexInput input) 
         {
            vertexOutput output;
 
            float4x4 modelMatrix = _Object2World;
            float4x4 modelMatrixInverse = _World2Object; 
               // multiplication with unity_Scale.w is unnecessary 
               // because we normalize transformed vectors
 
            output.normal = normalize(
               mul(float4(input.normal, 0.0), modelMatrixInverse).xyz);
            output.viewDir = normalize(_WorldSpaceCameraPos 
               - mul(modelMatrix, input.vertex).xyz);
 
            output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
            return output;
         }
 
         float4 frag(vertexOutput input) : COLOR
         {
            float3 normalDirection = normalize(input.normal);
            float3 viewDirection = normalize(input.viewDir);
 
            float newOpacity = min(1.0, _Color.a 
               / abs(dot(viewDirection, normalDirection)));
            return float4(_Color.rgb, newOpacity);
         }
 
         ENDCG
      }
   }
}

The assignment to newOpacity is an almost literal translation of the equation

   α′ = min(1, α / |V·N|)

Note that we normalize the vertex output parameters output.normal and output.viewDir in the vertex shader (because we want to interpolate between directions without putting more or less weight on any of them) and at the beginning of the fragment shader (because the interpolation can distort our normalization to a certain degree). However, in many cases the normalization of output.normal in the vertex shader is not necessary. Similarly, the normalization of output.viewDir in the fragment shader is in most cases unnecessary.

More Artistic Control

While the described silhouette enhancement is based on a physical model, it lacks artistic control; i.e., a CG artist cannot easily create a thinner or thicker silhouette than the physical model suggests. To allow for more artistic control, you could introduce another (positive) floating-point number property and take the dot product |V·N| to the power of this number (using the built-in Cg function pow(float x, float y)) before using it in the equation above. This will allow CG artists to create thinner or thicker silhouettes independently of the opacity of the base color.
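
A minimal sketch of this variant (the property name _SilhouetteExponent is made up for this example and is not part of the original shader; it would have to be added to the Properties block as shown in the comments):

            // In the Properties block:
            //    _SilhouetteExponent ("Silhouette Exponent", Float) = 1.0
            // In the CGPROGRAM section:
            uniform float _SilhouetteExponent;

            float4 frag(vertexOutput input) : COLOR
            {
               float3 normalDirection = normalize(input.normal);
               float3 viewDirection = normalize(input.viewDir);

               // Exponents greater than 1 shrink |V·N| (which is at most 1)
               // and thus thicken the silhouette; exponents less than 1 thin it.
               float newOpacity = min(1.0, _Color.a
                  / pow(abs(dot(viewDirection, normalDirection)),
                     _SilhouetteExponent));
               return float4(_Color.rgb, newOpacity);
            }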

Summary

Congratulations, you have finished this tutorial. We have discussed:

  • How to find silhouettes of smooth surfaces (using the dot product of the normal vector and the view direction).
  • How to enhance the opacity at those silhouettes.
  • How to implement equations in shaders.
  • How to transform points and normal vectors from object space to world space (using the transposed inverse model matrix for normal vectors).
  • How to compute the viewing direction (as the difference from the camera position to the vertex position).
  • How to interpolate normalized directions (i.e. normalize twice: in the vertex shader and the fragment shader).
  • How to provide more artistic control over the thickness of silhouettes.

Further Reading

If you still want to know more, you could revisit the sections referenced above, in particular Section “Transparency” (alpha blending) and Section “Shading in World Space” (shader properties and world-space transformations).

Original post: http://www.cnblogs.com/jackmaxwell/p/4630726.html
