GLSL - Issues when simulating directional light in OpenGL


I'm working on an OpenGL application using the Qt5 GUI framework; however, I'm not an expert in OpenGL and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm that I used in a WebGL application, where it works fine.

The application is used to render multiple adjacent cells of a large gridblock (each cell is represented by 8 independent vertices), meaning that the vertices of the whole gridblock are duplicated in the VBO. Normals are calculated per face in the geometry shader, as shown in the code below.
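(Side note for context: the buffer-building code is not part of the question. The sketch below is purely a hypothetical illustration of how a per-cell layout with duplicated vertices could look; the names, types and corner/winding order are assumptions, not the actual code.)

#include <QVector>
#include <QVector3D>

// Hypothetical sketch: every cell gets its own copy of its 8 corner vertices
// (nothing is shared with neighbouring cells), plus 6 faces x 2 triangles = 36
// indices into those 8 vertices.
static const int kFaceCorners[6][4] = {
    {0, 1, 2, 3}, {4, 5, 6, 7}, {0, 1, 5, 4},
    {2, 3, 7, 6}, {1, 2, 6, 5}, {0, 3, 7, 4}
};

void appendCell(const QVector3D corners[8],
                QVector<QVector3D> &vertices,    // VBO contents
                QVector<unsigned int> &indices)  // index buffer contents
{
    const unsigned int base = vertices.size();   // first vertex of this cell
    for (int i = 0; i < 8; ++i)
        vertices.append(corners[i]);

    for (const auto &face : kFaceCorners) {
        // split each quad face into two triangles
        indices << base + face[0] << base + face[1] << base + face[2];
        indices << base + face[0] << base + face[2] << base + face[3];
    }
}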

QOpenGLWidget paintGL() body:

void OpenGLWidget::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    m_camera = camera.toMatrix();
    m_world.setToIdentity();

    m_program->bind();
    m_program->setUniformValue(m_projMatrixLoc, m_proj);
    m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);

    QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
    m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);

    QVector3D lightDirection = QVector3D(1, 1, 1);
    lightDirection.normalize();
    QVector3D directionalColor = QVector3D(1, 1, 1);
    QVector3D ambientLight = QVector3D(0.2, 0.2, 0.2);
    m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
    m_program->setUniformValue(m_directionalColorLoc, directionalColor);
    m_program->setUniformValue(m_ambientColorLoc, ambientLight);

    geometries->drawGeometry(m_program);
    m_program->release();
}
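(The m_*Loc handles used above are not set up in the snippet; presumably they are obtained once during initialization with QOpenGLShaderProgram::uniformLocation(). A minimal sketch of that lookup, using the uniform names from the shaders below, would be:)

// Hypothetical sketch - the actual initializeGL() is not shown in the question.
// The uniform names match the shaders listed below.
void OpenGLWidget::initializeGL()
{
    // ... context setup, shader compilation and geometry upload omitted ...
    m_program->bind();
    m_projMatrixLoc       = m_program->uniformLocation("projMatrix");
    m_mvMatrixLoc         = m_program->uniformLocation("mvMatrix");
    m_normalMatrixLoc     = m_program->uniformLocation("normalMatrix");
    m_lightDirectionLoc   = m_program->uniformLocation("lightDirection");
    m_directionalColorLoc = m_program->uniformLocation("directionalColor");
    m_ambientColorLoc     = m_program->uniformLocation("ambientColor");
    m_program->release();
}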

Vertex shader

#version 330
layout(location = 0) in vec4 vertex;

uniform mat4 projMatrix;
uniform mat4 mvMatrix;

void main()
{
    gl_Position = projMatrix * mvMatrix * vertex;
}

Geometry shader

#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

out vec3 transformedNormal;

uniform mat3 normalMatrix;

void main()
{
    vec3 a = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 b = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;

    gl_Position = gl_in[0].gl_Position;
    transformedNormal = normalMatrix * normalize(cross(a, b));
    EmitVertex();

    gl_Position = gl_in[1].gl_Position;
    transformedNormal = normalMatrix * normalize(cross(a, b));
    EmitVertex();

    gl_Position = gl_in[2].gl_Position;
    transformedNormal = normalMatrix * normalize(cross(a, b));
    EmitVertex();

    EndPrimitive();
}

Fragment shader

#version 330
in vec3 transformedNormal;
out vec4 fColor;

uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;

void main()
{
    highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
    vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
    highp vec3 color = vec3(1.0, 1.0, 0.0);
    fColor = vec4(color * vLightWeighting, 1.0);
}

The 1st issue: the lighting on the faces seems to change whenever the camera angle changes (the camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. I guess I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.

[screenshot]

The 2nd issue (the one causing me headaches): whenever the camera is moved, the edges of the cells show blocky, jagged lines that flicker as the camera moves around. The effect gets nastier when many cells are clustered together.

[screenshot]

The model used in the snapshots is a sample slab of 10 cells, to better illustrate the faulty effects. The actual models (gridblocks) contain around 200k cells stacked together.

EDIT: Solution to the 2nd issue. I was using zNear/zFar values of 0.01f and 50000.0f respectively; when I changed zNear to 1.0f, the effect disappeared. According to the OpenGL wiki, this is caused by a zNear clipping plane value that is too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
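(For reference, a minimal sketch of the corrected projection setup, assuming m_proj is a QMatrix4x4 rebuilt in resizeGL(); the field of view and aspect handling here are assumptions, not the actual code:)

void OpenGLWidget::resizeGL(int w, int h)
{
    // zNear raised from 0.01f to 1.0f; zFar stays at 50000.0f.
    m_proj.setToIdentity();
    m_proj.perspective(45.0f, float(w) / float(h ? h : 1), 1.0f, 50000.0f);
}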

EDIT2: I tried to debug by drawing the normals as suggested in the comments, and I realized that I shouldn't calculate them based on gl_Position (i.e. after the MVP matrix multiplication in the VS); instead I should use the original vertex locations, so I modified the shaders as follows:

Vertex shader (updated)

#version 330
layout(location = 0) in vec4 vertex;

out vec3 vert;

uniform mat4 projMatrix;
uniform mat4 mvMatrix;

void main()
{
    vert = vertex.xyz;
    gl_Position = projMatrix * mvMatrix * vertex;
}

Geometry shader (updated)

#version 330
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in vec3 vert[];
out vec3 transformedNormal;

uniform mat3 normalMatrix;

void main()
{
    vec3 a = vert[2].xyz - vert[0].xyz;
    vec3 b = vert[1].xyz - vert[0].xyz;

    gl_Position = gl_in[0].gl_Position;
    transformedNormal = normalize(normalMatrix * normalize(cross(a, b)));
    EmitVertex();

    gl_Position = gl_in[1].gl_Position;
    transformedNormal = normalize(normalMatrix * normalize(cross(a, b)));
    EmitVertex();

    gl_Position = gl_in[2].gl_Position;
    transformedNormal = normalize(normalMatrix * normalize(cross(a, b)));
    EmitVertex();

    EndPrimitive();
}

But even after this modification the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know if the normal calculation is wrong, or the normal matrix calculation is done wrong, or maybe both...

[screenshot]
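(For anyone who wants to reproduce the normal-debugging step mentioned above: a minimal fragment shader that visualizes the interpolated normal as a colour could look like the sketch below. This is an illustration, not the exact code used here.)

#version 330
in vec3 transformedNormal;
out vec4 fColor;

void main()
{
    // Map the normal's XYZ components from [-1, 1] into the [0, 1] colour range
    // so each face's orientation shows up as a distinct colour.
    fColor = vec4(normalize(transformedNormal) * 0.5 + 0.5, 1.0);
}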

EDIT3: Solution to the 1st issue: changing the normal calculation in the GS from transformedNormal = normalize(normalMatrix * normalize(cross(a,b))); to transformedNormal = normalize(cross(a,b)); seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue, and the normals no longer change with the viewing angle. (Presumably this works because the light direction is supplied in world space, while the normal matrix, built from the model-view matrix, rotates the normals into eye space, which made the lighting depend on the camera orientation.)

If I missed any important/relevant information, please notify me in a comment.

1. Depth buffer precision

The depth buffer is usually stored as a 16- or 24-bit buffer. It is a HW implementation of a float normalized to a specific range, so there are only a few bits for the mantissa/exponent in comparison to a standard float.

If I oversimplify things and assume integer values instead of floats, then for a 16-bit buffer you get 2^16 values. If you have zNear = 0.1 and zFar = 50000.0, you only have 65535 values over the whole range. Because the depth values are nonlinear, you get higher accuracy near the zNear plane and much, much lower accuracy near the zFar plane, where the depth values jump by a bigger and bigger step, causing accuracy problems wherever two polygons are close together.

I empirically arrived at this rule for setting the planes in my views:

• (zFar - zNear) / desired_accuracy_step > 0.3 * (2^n)

where n is the depth buffer bit-width and desired_accuracy_step is the resolution you want/need along the z axis. Sometimes I have seen it exchanged with the zNear value. A quick numeric check of what this means in practice is sketched below.
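(To get a feel for the numbers, here is a rough, self-contained sketch added for illustration; it is not part of the original answer and uses the standard perspective depth mapping d = f*(z - n) / ((f - n)*z) rather than the simplified integer model above. It estimates the smallest resolvable eye-space depth step at a given distance for the question's two zNear values.)

#include <cmath>
#include <cstdio>

// Smallest eye-space depth difference a b-bit fixed-point depth buffer can
// resolve at eye distance z, assuming the standard perspective depth mapping.
double depthResolution(double z, double n, double f, int bits)
{
    const double values = std::pow(2.0, bits) - 1.0;  // distinct buffer values
    return z * z * (f - n) / (f * n * values);        // dz per one-value step in d
}

int main()
{
    const double f    = 50000.0;  // zFar from the question
    const int    bits = 24;       // common depth-buffer width
    const double samples[] = {10.0, 100.0, 1000.0};

    for (double z : samples) {
        std::printf("z = %7.1f   zNear = 0.01 -> step %.4f   zNear = 1.0 -> step %.6f\n",
                    z,
                    depthResolution(z, 0.01, f, bits),
                    depthResolution(z, 1.0,  f, bits));
    }
    return 0;
}

With zFar = 50000, raising zNear from 0.01 to 1.0 makes the resolvable depth step at any given distance roughly 100 times finer, which is consistent with the flickering edges disappearing after the change described in the question's EDIT.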

