How To Load Lights Correctly From Assimp? - blender

I have successfully loaded lights from a COLLADA file that I exported from Blender, using Assimp, and I managed to get all of the light data. However, I just can't get the attenuation right.
I know the attenuation equation, but the numbers I am loading from the model are clearly not correct.
I made multiple scenes with different lights at different strengths in Blender, then loaded them using Assimp, and all the lights had the same attenuation even though they had different strengths. The constant value is 1, the linear value is always 0, and the quadratic value is always around 0.0015, and these are the same numbers for every light that I load. Also, the light colors are not in the 0-1 range and are instead some large numbers, for example (500, 700, 800), which results in incredibly bright lighting, making my whole scene white from the brightness. I tried to convert the colors to a 0-1 range, but that still did not work.
So my question is: how can I get the correct attenuation for a light source?
Do I have to do any sort of extra calculations on the attenuation variables, given that they come out the same for every light source?
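As far as I can tell, Assimp simply reports whatever attenuation and colour values the exporter wrote into the .dae, so if Blender emits constant = 1, linear = 0 and a tiny quadratic term for every lamp, that is exactly what you will read back. Below is a minimal sketch of reading the raw aiLight fields and splitting the oversized colour into a 0-1 colour plus an intensity; the aiLight member names are real Assimp API, but the PointLight struct and the normalisation step are my own assumptions, not anything Assimp or COLLADA prescribes.

    // Minimal sketch: read the raw aiLight fields and split the colour.
    // The aiLight members are real Assimp API; PointLight and the
    // normalisation below are assumptions, not something Assimp requires.
    #include <assimp/scene.h>
    #include <algorithm>

    struct PointLight {
        float constant, linear, quadratic;  // attenuation = 1 / (c + l*d + q*d*d)
        float color[3];                     // normalised to 0..1
        float intensity;                    // brightness pulled out of the colour
    };

    PointLight loadPointLight(const aiLight* light)
    {
        PointLight out{};
        out.constant  = light->mAttenuationConstant;   // read verbatim from the file
        out.linear    = light->mAttenuationLinear;
        out.quadratic = light->mAttenuationQuadratic;

        aiColor3D c = light->mColorDiffuse;            // may be colour * lamp energy, hence values like 500+
        float maxC  = std::max({ c.r, c.g, c.b, 1.0f });
        out.color[0]  = c.r / maxC;
        out.color[1]  = c.g / maxC;
        out.color[2]  = c.b / maxC;
        out.intensity = maxC;                          // keep the strength separately
        return out;
    }

If every light still ends up with the same falloff, a common fallback is to ignore the exported coefficients entirely and derive your own linear/quadratic terms from that intensity (or from a chosen light radius), since the shader only needs a sensible falloff, not the exporter's numbers. Each light is available as scene->mLights[i] for i up to scene->mNumLights.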

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point positions on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
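For what it's worth, the first approach (projecting the corners of the bounding box) can be made robust if corners behind the camera are handled explicitly; those are exactly the points whose pixel coordinates shoot off towards infinity. A rough sketch of the idea in plain C++-style math follows, where projectToScreen() stands in for Godot's Camera.unproject_position() and isBehindCamera() for Camera.is_position_behind(); the small structs and both helpers are placeholders, not real engine API.

    // Sketch: project the 8 corners of the mesh's world-space AABB and take
    // the 2D min/max. projectToScreen() and isBehindCamera() are assumed
    // helpers corresponding to the Godot camera methods named above.
    #include <algorithm>
    #include <cfloat>

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };
    struct Rect { float minX, minY, maxX, maxY; };

    Vec2 projectToScreen(const Vec3& worldPoint);  // assumed: world -> pixel coordinates
    bool isBehindCamera(const Vec3& worldPoint);   // assumed: true if behind the near plane

    Rect screenSpaceAabb(const Vec3& aabbMin, const Vec3& aabbMax,
                         float screenW, float screenH)
    {
        Rect r = { FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX };
        bool anyBehind = false;

        for (int i = 0; i < 8; ++i) {
            // Enumerate the 8 corners by choosing min or max per axis.
            Vec3 corner = { (i & 1) ? aabbMax.x : aabbMin.x,
                            (i & 2) ? aabbMax.y : aabbMin.y,
                            (i & 4) ? aabbMax.z : aabbMin.z };
            if (isBehindCamera(corner)) { anyBehind = true; continue; }
            Vec2 p = projectToScreen(corner);
            r.minX = std::min(r.minX, p.x);  r.maxX = std::max(r.maxX, p.x);
            r.minY = std::min(r.minY, p.y);  r.maxY = std::max(r.maxY, p.y);
        }

        // Corners behind the camera project to meaningless (huge) coordinates,
        // so when any are behind, or the camera is inside the box, fall back to
        // covering the whole screen rather than trusting the projection.
        if (anyBehind)
            return { 0.0f, 0.0f, screenW, screenH };

        // Clamp the result to the viewport.
        r.minX = std::max(r.minX, 0.0f);     r.minY = std::max(r.minY, 0.0f);
        r.maxX = std::min(r.maxX, screenW);  r.maxY = std::min(r.maxY, screenH);
        return r;
    }

The whole-screen fallback handles the case you describe where the camera is inside or very close to the model; getting a tighter box in that situation would require clipping the box edges against the near plane first, which is probably not worth it for a shader hint.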

Measuring sizes of before/after images in Photoshop

Perhaps my mind isn't mathematically competent enough to do this, but here it goes:
I am using Photoshop. I have 2 images taken from different heights. Both images contain the same object (so the size of this object remains the same), but I am trying to resize both images so that this object is the same pixel size. That way I can properly measure the difference between other objects in the images with the proper ratio.
My end goal is to measure the differences of scars healing (before and after) using a same-size object in both images as a baseline.
To measure the difference in the photo, I have been counting pixels using the histogram feature:
Even though I changed the pixel width and height to roughly the same size, the 2 images have a drastically different number of pixels. So comparing the red or white from the before to the after won't make sense until I can get these to match.
Can anyone point me in the right direction here? How can I compare apples to apples here?
So I went a different route here, in case anyone was wondering what I did.
Rather than change the size of the images, I just calculated the increase manually, separately for each image.
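For anyone who does want the resize route: the scale factor is just the ratio of the reference object's pixel sizes. With made-up numbers, if the object measures 500 px across in the before image and 400 px in the after image, scaling the after image to 500/400 = 125% (Image > Image Size, with resampling enabled) makes the object the same pixel size in both, and the histogram pixel counts then become directly comparable.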

how to get faster rendering of 400+ polygons with SFML

I'm making a basic simulation of moving planets and the gravitational pull between them, displaying the gravity as a big field of green vectors that point in the direction gravity is pulling and whose magnitude shows the strength of the pull.
This means I have 400+ lines, which are really rectangles with a rotation, being redrawn each frame, and this is killing my frame rate. Is there any way to optimize this other than drawing fewer lines? How do 2D OpenGL games today achieve such high frame rates even with many complex polygons/colors?
EDIT:
SFML does the actual rendering each frame, but the way I create my lines is by making a rectangle-like sf::Shape. The generation function takes a width, and sets point 1 as (0, width), point 2 as (0, -width), point 3 as (LineLength, -width), and point 4 as (LineLength, width). This forms a rectangle which extends along the positive x-axis. Finally, I rotate the rectangle around (0, 0) to get it to the right orientation, and set the shape's position to be wherever the start of the line is supposed to be.
How do 2d OpenGL games today achieve such high frame-rates even with many complex polygons/colors?
I imagine by not drawing 400+ 4-vertex objects that are each rotated and scaled with a matrix.
If you want to draw a lot of these things, you're going to have to stop relying on SFML's drawing classes. That introduces a lot of overhead. You're going to have to do it the right way: by drawing lines.
If you insist on each line having a separate width, then you can't use GL_LINES. You must instead compute the four positions of the "line" and stick them in a buffer object. Then, you draw them with a single GL_QUADS call. You will need to use proper buffer object streaming techniques to make this work reasonably fast.
Large batches and VBOs. Also double-check how much time you're spending in your simulation update code.
Quick check: If you have a glBegin() anywhere near your main render loop you are probably Doing It Wrong.
Calculate all your vertex positions, then stream them into the GPU via GL_STREAM_DRAW. If you can tolerate some latency use two VBOs and double-buffer.
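To make the buffer-streaming suggestion concrete, here is a rough sketch of building all the line quads on the CPU each frame and drawing them with one call. It assumes an existing GL context (the one SFML created is fine), that the buffer-object entry points are loaded via GLEW or glad, and a compatibility profile where GL_QUADS and the fixed-function vertex array are still available; the Line struct and its fields are invented to match the description above.

    // Sketch (see note above): batch all line quads into one streamed VBO
    // each frame and issue a single GL_QUADS draw call.
    #include <GL/glew.h>
    #include <cmath>
    #include <vector>

    struct Line {
        float x, y;        // start position of the line
        float angle;       // orientation in radians
        float length;      // extent along the local x-axis
        float halfWidth;   // half the thickness of the rectangle
    };

    void drawLines(GLuint vbo, const std::vector<Line>& lines)
    {
        std::vector<float> verts;
        verts.reserve(lines.size() * 4 * 2);            // 4 corners, 2 floats each

        for (const Line& l : lines) {
            const float c = std::cos(l.angle), s = std::sin(l.angle);
            // Local corners (0,+w) (0,-w) (len,-w) (len,+w), rotated then translated.
            const float local[4][2] = { { 0.0f,      l.halfWidth }, { 0.0f,     -l.halfWidth },
                                        { l.length, -l.halfWidth }, { l.length,  l.halfWidth } };
            for (const auto& p : local) {
                verts.push_back(l.x + p[0] * c - p[1] * s);
                verts.push_back(l.y + p[0] * s + p[1] * c);
            }
        }

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Orphan last frame's storage, then upload this frame's data (streaming).
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), nullptr, GL_STREAM_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(float), verts.data());

        glEnableClientState(GL_VERTEX_ARRAY);           // fixed-function path, to match GL_QUADS
        glVertexPointer(2, GL_FLOAT, 0, nullptr);
        glDrawArrays(GL_QUADS, 0, static_cast<GLsizei>(verts.size() / 2));
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

If you would rather stay inside SFML, filling a single sf::VertexArray with sf::Quads the same way and drawing it once gives a similar one-draw-call batch without touching raw GL.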

On-the-fly Terrain Generation Based on An Existing Terrain

This question is very similar to that posed here.
My problem is that I have a map, something like this:
This map is made using 2D Perlin noise, then running through the generated heightmap and assigning a type and color value to each element of the terrain based on the height or the slope of the corresponding element, so pretty standard. The map array is two-dimensional and exactly the dimensions of the screen (pixel-per-pixel), so at 1200 by 800, generation takes about 2 seconds on my rig.
Now zooming in on the highlighted rectangle:
Obviously with increased size comes lost detail. And herein lies the problem. I want to create additional detail on the fly, and then write it to disk as the player moves around (the player would simply be a dot restricted to movement along the grid). I see two approaches for doing this, and the first one that came to mind I quickly implemented:
This is a zoomed-in view of a new biased local terrain created from a sampled element of the old terrain, which is highlighted by the yellow grid space (to the left of center) in the previous image. However this system would require a great deal of modification, as, for example, if you move one unit left and up of the yellow grid space, onto the beach tile, the terrain changes completely:
So for that to work properly you'd need to do an excessive amount of, I guess the word would be interpolation, to create a smooth transition as the player moves the 40 or so grid spaces in the local world required to reach the next tile over in the overworld. That seems complicated and very inelegant.
The second approach would be to break up the grid of the original map into smaller pieces, maybe subdividing each square into 4? I haven't implemented this, and I'm not sure how I would do it in a way that actually increases detail, but I think that would probably end up being the best solution.
Any ideas on how I could approach this? Keep in mind it has to be local and on-the-fly. Just increasing the resolution of the map is something I want to avoid at all costs.
Rewrite your Perlin noise to be a function of position. Then you can increase the octaves (and thus the detail level) and resample the area at a higher resolution.
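In other words, instead of filling a fixed 1200 by 800 array once, make the height a pure function of world coordinates and decide the sampling density (and octave count) at the call site. A small sketch of that idea; perlin2d() stands in for whatever deterministic 2D Perlin implementation the map already uses, so treat the names as placeholders.

    // Sketch of "noise as a function of position": height depends only on
    // world coordinates, so any region can be re-sampled at any density,
    // with more octaves for more detail.
    double perlin2d(double x, double y);  // assumed: deterministic noise in [-1, 1]

    double heightAt(double worldX, double worldY, int octaves)
    {
        double amplitude = 1.0, frequency = 1.0;
        double sum = 0.0, norm = 0.0;
        for (int i = 0; i < octaves; ++i) {
            sum  += amplitude * perlin2d(worldX * frequency, worldY * frequency);
            norm += amplitude;
            amplitude *= 0.5;   // each octave contributes half as much...
            frequency *= 2.0;   // ...at twice the spatial frequency
        }
        return sum / norm;      // stays roughly in [-1, 1] for any octave count
    }

    // Overview map: call heightAt once per overworld tile with a few octaves.
    // Zoomed-in map: call it on a finer grid over the same world coordinates
    // (e.g. 40x40 sub-samples per tile) with more octaves. Because the low
    // octaves are identical, the detail view matches the overview and blends
    // seamlessly into neighbouring tiles, with no interpolation pass needed.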

Java 3D: Unable to get Shape3D to be affected by lights

I am attempting to get a custom Shape3D to be affected by a DirectionalLight in Java 3D, but nothing I do seems to work.
The Shape has a geometry that is an IndexedQuadArray, with the NORMALS flag set and the normals applied, ensuring the normal vectors are assigned to the correct vertices (using indexed normals).
I have given the Appearance a Material (both with specified colors and shininess, and without)
I have also put the light on the same BranchGroup as the Shape, but it still does not work.
In fact, when I add the normals to the shape, the object appears to disappear; without them, it's flat shaded, so that all faces are the same shade.
I can only think that I am forgetting to include something ridiculously simple, or have done something wrong.
To test that the lights were actually working, I put a Sphere beside the Shape, and the sphere was affected and lit correctly, but the shape still wasn't. Both were on the same BranchGroup.
(Small oddity too: if I translate the sphere, it vanishes if I move it more than 31 units in any direction... my view is set about 700 back, as I'm dealing with objects of sizes up to 600 in width.)
Edit: I found this in the official tutorials, which is probably related:
A visual object properly specified for shading (i.e., one with a Material object) in a live scene graph but outside the influencing bounds of all light source objects renders black.
The light's setInfluencingBounds() was not set correctly, so the shapes in the scene were not being included in the bounds.
This was corrected by setting up a BoundingBox that encompasses the entire area and assigning it as the light's influencing bounds.
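Concretely, the fix amounts to constructing a BoundingBox whose two corner Point3d values span the whole scene (for objects up to 600 wide viewed from roughly 700 back, something on the order of -1000 to +1000 on each axis; those numbers are just a guess sized to the figures above) and passing it to the light's setInfluencingBounds(); any shape outside those bounds renders black, exactly as the tutorial excerpt says.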