How to make a 2D shader work with a ParallaxBackground node in Godot?

In my game I want to make a scrolling background with moving stars. I am using a ParallaxBackground node with a ParallaxLayer as a child, and the latter has a TextureRect child that displays a 2D shader for the stars.
Node hierarchy:
ParallaxBackground -> StarsLayer -> Stars
Stars is the TextureRect and its rect_size equals the project window size.
Here is the 2d shader that it uses:
shader_type canvas_item;

uniform vec4 bg_color: hint_color;

float rand(vec2 st) {
    return fract(sin(dot(st.xy, vec2(12.9898,78.233))) * 43758.5453123);
}

void fragment() {
    float size = 100.0;
    float prob = 0.9;
    vec2 pos = floor(1.0 / size * FRAGCOORD.xy);
    float color = 0.0;
    float starValue = rand(pos);
    if (starValue > prob)
    {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float t = 0.9 + 0.2 * sin(TIME * 8.0 + (starValue - prob) / (1.0 - prob) * 45.0);
        color = 1.0 - distance(FRAGCOORD.xy, center) / (0.5 * size);
        color = color * t / (abs(FRAGCOORD.y - center.y)) * t / (abs(FRAGCOORD.x - center.x));
    }
    else if (rand(SCREEN_UV.xy / 20.0) > 0.996)
    {
        float r = rand(SCREEN_UV.xy);
        color = r * (0.85 * sin(TIME * (r * 5.0) + 720.0 * r) + 0.95);
    }
    COLOR = vec4(vec3(color), 1.0) + bg_color;
}
Here is the ParallaxBackground script:
extends ParallaxBackground

onready var stars_layer = $StarsLayer

var bg_offset = 0.0

func _ready():
    stars_layer.motion_mirroring = Vector2(0, Helpers.WINDOW_SIZE.y)

func _process(delta):
    bg_offset += 30 * delta
    scroll_offset = Vector2(0, bg_offset)
The problem is that the stars are shown but are not moving at all.

Use motion_offset instead of scroll_offset:

func _process(delta):
    motion_offset += 30 * delta

Related

Change Shape of Heatmap Element from Circle to Square?

I'm a novice with OpenLayers 6. Can anyone tell me how to change the rendered elements of Heatmap.js from a circle to a square? Thanks a lot!
This is currently not configurable, though you can override the createRenderer method of the Heatmap layer to do this (not supported by the API, so it may break in the future).
Here is a working example: https://codesandbox.io/s/heatmap-earthquakes-squares-hdrbs?file=/main.js
These are the needed changes from the original function; the circular (Euclidean) distance falloff is replaced with a square (Chebyshev, max(|x|, |y|)) falloff:
diff --git a/src/ol/layer/Heatmap.js b/src/ol/layer/Heatmap.js
index c3e3306c8..2873bf184 100644
--- a/src/ol/layer/Heatmap.js
+++ b/src/ol/layer/Heatmap.js
@@ -222,8 +222,8 @@ class Heatmap extends VectorLayer {
       void main(void) {
         vec2 texCoord = v_texCoord * 2.0 - vec2(1.0, 1.0);
-        float sqRadius = texCoord.x * texCoord.x + texCoord.y * texCoord.y;
-        float value = (1.0 - sqrt(sqRadius)) * u_blurSlope;
+        float distance = max(abs(texCoord.x), abs(texCoord.y));
+        float value = (1.0 - distance) * u_blurSlope;
         float alpha = smoothstep(0.0, 1.0, value) * v_weight;
         gl_FragColor = vec4(alpha, alpha, alpha, alpha);
       }`,
@@ -263,8 +263,8 @@ class Heatmap extends VectorLayer {
       void main(void) {
         vec2 texCoord = v_texCoord * 2.0 - vec2(1.0, 1.0);
-        float sqRadius = texCoord.x * texCoord.x + texCoord.y * texCoord.y;
-        float value = (1.0 - sqrt(sqRadius)) * u_blurSlope;
+        float distance = max(abs(texCoord.x), abs(texCoord.y));
+        float value = (1.0 - distance) * u_blurSlope;
         float alpha = smoothstep(0.0, 1.0, value) * v_weight;
         if (alpha < 0.05) {
           discard;

Look-at quaternion using up vector

I have a camera (in a custom 3D engine) that accepts a quaternion for the rotation transform. I have two 3D points representing a camera and an object to look at. I want to calculate the quaternion that looks from the camera to the object, while respecting the world up axis.
This question asks for the same thing without the "up" vector. All three answers result in the camera pointing in the correct direction, but rolling (as in yaw/pitch/roll; imagine leaning your head onto your ear while looking at something).
I can calculate an orthonormal basis of vectors that match the desired coordinate system by:
lookAt = normalize(target - camera)
sideaxis = cross(lookAt, worldUp)
rotatedup = cross(sideaxis, lookAt)
How can I create a quaternion from those three vectors? This question asks for the same thing...but unfortunately the only and accepted answer says ~"let's assume you don't care about roll", and then goes about ignoring the up axis. I do care about roll. I don't want to ignore the up axis.
A previous answer has given a valid solution using angles. This answer will present an alternative method.
The orthonormal basis vectors, renaming them F = lookAt, R = sideaxis, U = rotatedup, directly form the columns of the 3x3 rotation matrix which is equivalent to your desired quaternion:
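Written out with the basis vectors as columns (column-vector convention, matching the element order used in the code below):

$$M = \begin{pmatrix} R_x & U_x & F_x \\ R_y & U_y & F_y \\ R_z & U_z & F_z \end{pmatrix}$$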
Multiplication with a vector is equivalent to using said vector's components as the coordinates in the camera's basis.
A 3x3 rotation matrix can be converted into a quaternion without conversion to angles / use of costly trigonometric functions. Below is a numerically stable C++ snippet which does this, returning a normalized quaternion:
inline void CalculateRotation( Quaternion& q ) const {
    float trace = a[0][0] + a[1][1] + a[2][2];
    if( trace > 0 ) {
        float s = 0.5f / sqrtf(trace + 1.0f);
        q.w = 0.25f / s;
        q.x = ( a[2][1] - a[1][2] ) * s;
        q.y = ( a[0][2] - a[2][0] ) * s;
        q.z = ( a[1][0] - a[0][1] ) * s;
    } else {
        if ( a[0][0] > a[1][1] && a[0][0] > a[2][2] ) {
            float s = 2.0f * sqrtf( 1.0f + a[0][0] - a[1][1] - a[2][2]);
            q.w = (a[2][1] - a[1][2] ) / s;
            q.x = 0.25f * s;
            q.y = (a[0][1] + a[1][0] ) / s;
            q.z = (a[0][2] + a[2][0] ) / s;
        } else if (a[1][1] > a[2][2]) {
            float s = 2.0f * sqrtf( 1.0f + a[1][1] - a[0][0] - a[2][2]);
            q.w = (a[0][2] - a[2][0] ) / s;
            q.x = (a[0][1] + a[1][0] ) / s;
            q.y = 0.25f * s;
            q.z = (a[1][2] + a[2][1] ) / s;
        } else {
            float s = 2.0f * sqrtf( 1.0f + a[2][2] - a[0][0] - a[1][1] );
            q.w = (a[1][0] - a[0][1] ) / s;
            q.x = (a[0][2] + a[2][0] ) / s;
            q.y = (a[1][2] + a[2][1] ) / s;
            q.z = 0.25f * s;
        }
    }
}
Source: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion
Converting this to suit your situation is of course just a matter of swapping the matrix elements with the corresponding vector components:
// your code from before
F = normalize(target - camera); // lookAt
R = normalize(cross(F, worldUp)); // sideaxis
U = cross(R, F); // rotatedup
// note that R needs to be re-normalized,
// since F and worldUp are not necessarily perpendicular,
// so the sin(angle) factor of the cross product must be removed;
// the same is not needed for U because dot(R, F) = 0
// adapted source
Quaternion q;
double trace = R.x + U.y + F.z;
if (trace > 0.0) {
    double s = 0.5 / sqrt(trace + 1.0);
    q.w = 0.25 / s;
    q.x = (U.z - F.y) * s;
    q.y = (F.x - R.z) * s;
    q.z = (R.y - U.x) * s;
} else {
    if (R.x > U.y && R.x > F.z) {
        double s = 2.0 * sqrt(1.0 + R.x - U.y - F.z);
        q.w = (U.z - F.y) / s;
        q.x = 0.25 * s;
        q.y = (U.x + R.y) / s;
        q.z = (F.x + R.z) / s;
    } else if (U.y > F.z) {
        double s = 2.0 * sqrt(1.0 + U.y - R.x - F.z);
        q.w = (F.x - R.z) / s;
        q.x = (U.x + R.y) / s;
        q.y = 0.25 * s;
        q.z = (F.y + U.z) / s;
    } else {
        double s = 2.0 * sqrt(1.0 + F.z - R.x - U.y);
        q.w = (R.y - U.x) / s;
        q.x = (F.x + R.z) / s;
        q.y = (F.y + U.z) / s;
        q.z = 0.25 * s;
    }
}
(And needless to say swap y and z if you're using OpenGL.)
Assume you initially have three orthonormal vectors: worldUp, worldFront and worldSide, and let's use your equations for lookAt, sideAxis and rotatedUp. The worldSide vector will not be necessary to achieve the result.
Break the operation into two. First, rotate around worldUp. Then rotate around sideAxis, which will now actually be parallel to the rotated worldSide.
Axis1 = worldUp
Angle1 = (see below)
Axis2 = cross(lookAt, worldUp) = sideAxis
Angle2 = (see below)
Each of these rotations correspond to a quaternion using:
Q = cos(Angle/2) + i * Axis_x * sin(Angle/2) + j * Axis_y * sin(Angle/2) + k * Axis_z * sin(Angle/2)
Multiply both Q1 and Q2 and you get the desired quaternion.
Details for the angles:
Let P(worldUp) be the projection matrix onto the worldUp direction, i.e., P(worldUp)·v = dot(worldUp, v)·worldUp, or using kets and bras, P(worldUp) = |worldUp⟩⟨worldUp|. Let I be the identity matrix.
Project lookAt onto the plane perpendicular to worldUp and normalize it:
tmp1 = (I - P(worldUp)).lookAt
n1 = normalize(tmp1)
Angle1 = arccos(dot(worldFront,n1))
Angle2 = arccos(dot(lookAt,n1))
EDIT1:
Notice that there is no need to compute transcendental functions. Since the dot product of a pair of normalized vectors is the cosine of the angle between them, and assuming that cos(t) = x, we have the half-angle identities:
cos(t/2) = sqrt((1 + x)/2)
sin(t/2) = sqrt((1 - x)/2)
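As an illustration, here is a minimal C++ sketch of that idea (the Vec3/Quaternion structs and the function names are placeholders, not from any particular engine): it builds each axis-angle quaternion directly from the cosine obtained via a dot product, and composes two such rotations with the Hamilton product.

#include <cmath>

struct Vec3 { double x, y, z; };
struct Quaternion { double w, x, y, z; };

// Build the quaternion for a rotation about a *unit* axis whose angle t has
// cosine cosT, using the half-angle identities above (no trig calls).
// Assumes 0 <= t <= pi, so the rotation direction is carried by the axis sign.
Quaternion quatFromAxisCos(const Vec3& axis, double cosT) {
    double c = std::sqrt((1.0 + cosT) * 0.5); // cos(t/2)
    double s = std::sqrt((1.0 - cosT) * 0.5); // sin(t/2)
    return { c, axis.x * s, axis.y * s, axis.z * s };
}

// Hamilton product; with this convention, q = mul(q2, q1) applies q1 first.
Quaternion mul(const Quaternion& a, const Quaternion& b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

Here cos(Angle1) = dot(worldFront, n1) with axis worldUp, and cos(Angle2) = dot(lookAt, n1) with axis sideAxis, as described above, taking care to orient each axis so the rotation turns the right way.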
If somebody is searching for a C# version that handles every matrix edge case (not input edge cases!), here it is:
public static SoftQuaternion LookRotation(SoftVector3 forward, SoftVector3 up)
{
    forward = SoftVector3.Normalize(forward);

    // First matrix column
    SoftVector3 sideAxis = SoftVector3.Normalize(SoftVector3.Cross(up, forward));
    // Second matrix column
    SoftVector3 rotatedUp = SoftVector3.Cross(forward, sideAxis);
    // Third matrix column
    SoftVector3 lookAt = forward;

    // Sums of matrix main diagonal elements
    SoftFloat trace1 = SoftFloat.One + sideAxis.X - rotatedUp.Y - lookAt.Z;
    SoftFloat trace2 = SoftFloat.One - sideAxis.X + rotatedUp.Y - lookAt.Z;
    SoftFloat trace3 = SoftFloat.One - sideAxis.X - rotatedUp.Y + lookAt.Z;

    // If the orthonormal vectors form the identity matrix, then return the identity rotation
    if (trace1 + trace2 + trace3 < SoftMath.CalculationsEpsilon)
    {
        return Identity;
    }

    // Choose the largest diagonal
    if (trace1 + SoftMath.CalculationsEpsilon > trace2 && trace1 + SoftMath.CalculationsEpsilon > trace3)
    {
        SoftFloat s = SoftMath.Sqrt(trace1) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (SoftFloat)0.25f * s,
            (rotatedUp.X + sideAxis.Y) / s,
            (lookAt.X + sideAxis.Z) / s,
            (rotatedUp.Z - lookAt.Y) / s);
    }
    else if (trace2 + SoftMath.CalculationsEpsilon > trace1 && trace2 + SoftMath.CalculationsEpsilon > trace3)
    {
        SoftFloat s = SoftMath.Sqrt(trace2) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (rotatedUp.X + sideAxis.Y) / s,
            (SoftFloat)0.25f * s,
            (lookAt.Y + rotatedUp.Z) / s,
            (lookAt.X - sideAxis.Z) / s);
    }
    else
    {
        SoftFloat s = SoftMath.Sqrt(trace3) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (lookAt.X + sideAxis.Z) / s,
            (lookAt.Y + rotatedUp.Z) / s,
            (SoftFloat)0.25f * s,
            (sideAxis.Y - rotatedUp.X) / s);
    }
}
This implementation is based on a deeper understanding of this conversation and was tested against many edge-case scenarios.
P.S.
The quaternion's constructor is (x, y, z, w).
SoftFloat is a software float type, so you can easily change it to the built-in float if needed.
For a fully edge-case-safe implementation (including input) check this repo.
lookAt
sideaxis
rotatedup
If you normalize these 3 vectors, they are the columns of a 3x3 rotation matrix. So just convert this rotation matrix to a quaternion.

Path Tracing - Generate Camera Rays with a Left Handed coordinate system

Been having some issues implementing a camera for my renderer. As the question states, I would like to know the necessary steps to generate such a camera, with field of view and aspect ratio included. It's important that the coordinate system be left-handed such that -z pushes the camera away from the screen (as I understand it). I have tried looking online but most of the implementations are incomplete or have failed me. Any help is appreciated. Thank you.
I had trouble with this and it took a long time to figure out. Here is the code for the camera class.
#ifndef CAMERA_H_
#define CAMERA_H_

#include "common.h"

struct Camera {
    Vec3fa position, direction;
    float fovDist, aspectRatio;
    double imgWidth, imgHeight;
    Mat4 camMatrix;

    Camera(Vec3fa pos, Vec3fa cRot, Vec3fa cDir, float cfov, int width, int height) {
        position = pos;
        aspectRatio = width / (float)height;
        imgWidth = width;
        imgHeight = height;
        Vec3fa angle = Vec3fa(cRot.x, cRot.y, -cRot.z);
        camMatrix.setRotationRadians(angle * M_PI / 180.0f);
        direction = Vec3fa(0.0f, 0.0f, -1.0f);
        camMatrix.rotateVect(direction);
        fovDist = 2.0f * tan(M_PI * 0.5f * cfov / 180.0);
    }

    Vec3fa getRayDirection(float x, float y) {
        Vec3fa delta = Vec3fa((x - 0.5f) * fovDist * aspectRatio, (y - 0.5f) * fovDist, 0.0f);
        camMatrix.rotateVect(delta);
        return (direction + delta);
    }
};

#endif
In case you need the rotateVect() code in the Mat4 class:
void Mat4::rotateVect(Vector3& vect) const
{
    Vector3 tmp = vect;
    vect.x = tmp.x * (*this)[0] + tmp.y * (*this)[4] + tmp.z * (*this)[8];
    vect.y = tmp.x * (*this)[1] + tmp.y * (*this)[5] + tmp.z * (*this)[9];
    vect.z = tmp.x * (*this)[2] + tmp.y * (*this)[6] + tmp.z * (*this)[10];
}
Here is our setRotationRadians code
void Mat4::setRotationRadians(Vector3 rotation)
{
    const float cr = cos(rotation.x);
    const float sr = sin(rotation.x);
    const float cp = cos(rotation.y);
    const float sp = sin(rotation.y);
    const float cy = cos(rotation.z);
    const float sy = sin(rotation.z);

    (*this)[0] = (cp * cy);
    (*this)[1] = (cp * sy);
    (*this)[2] = (-sp);

    const float srsp = sr * sp;
    const float crsp = cr * sp;

    (*this)[4] = (srsp * cy - cr * sy);
    (*this)[5] = (srsp * sy + cr * cy);
    (*this)[6] = (sr * cp);

    (*this)[8] = (crsp * cy + sr * sy);
    (*this)[9] = (crsp * sy - sr * cy);
    (*this)[10] = (cr * cp);
}
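For reference, here is a minimal sketch of how this camera might be driven from a render loop. Only the Camera struct above is assumed; the header name, image size, and the trace/shade step are placeholders:

#include "camera.h" // wherever the Camera struct above lives (name is a placeholder)

// Sketch: shoot one primary ray through the centre of each pixel.
// getRayDirection() expects x and y as normalized [0, 1] image coordinates,
// with (0.5, 0.5) at the centre of the screen.
void renderFrame() {
    const int WIDTH = 640, HEIGHT = 480;          // placeholder image size
    Camera cam(Vec3fa(0.0f, 0.0f, 0.0f),          // position
               Vec3fa(0.0f, 0.0f, 0.0f),          // rotation in degrees
               Vec3fa(0.0f, 0.0f, -1.0f),         // cDir (not used by the constructor body)
               60.0f,                             // field of view in degrees
               WIDTH, HEIGHT);

    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            float u = (x + 0.5f) / (float)WIDTH;
            float v = (y + 0.5f) / (float)HEIGHT;  // flip v if your image origin is top-left
            Vec3fa dir = cam.getRayDirection(u, v); // un-normalized ray direction
            // intersect the scene with (cam.position, dir), shade, write the pixel
        }
    }
}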

What is the most optimized way of creating a ray tracer?

Currently, I am working with a ray tracer that takes an iterative approach towards developing the scenes. My goal is to turn it into a recursive ray tracer.
At the moment, I have a ray tracer that performs the following operations to create the bitmap the image is stored in:
int WIDTH = 640;
int HEIGHT = 640;

BMP Image(WIDTH, HEIGHT); // create new bitmap

// Shoot rays slightly left or right of the camera direction
double xAMT, yAMT;

Color blue(0.1, 0.61, 0.76, 0);

for (int x = 0; x < WIDTH; x++) {
    for (int y = 0; y < HEIGHT; y++) {
        if (WIDTH > HEIGHT) {
            xAMT = ((x + 0.5) / WIDTH) * aspectRatio - (((WIDTH - HEIGHT) / (double)HEIGHT) / 2);
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }
        else if (HEIGHT > WIDTH) {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = (((HEIGHT - y) + 0.5) / HEIGHT) / aspectRatio - (((HEIGHT - WIDTH) / (double)WIDTH) / 2);
        }
        else {
            xAMT = (x + 0.5) / WIDTH;
            yAMT = ((HEIGHT - y) + 0.5) / HEIGHT;
        }

        // ..... calculate intersections, shading, reflectiveness, etc.
        Image.setPixel(x, y, blue); // this is here just as an example
    }
}
Is there another approach to calculating the reflective and refractive child rays outside the double for-loop?
Are the for-loops necessary? // yes because of the bitmap?
What approaches can be taken to minimize/optimize an iterative ray tracer?
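For what it's worth, the usual structure keeps the per-pixel loops (they only choose which primary ray to shoot) and moves reflection/refraction into a recursive trace function called once per pixel. A rough, self-contained sketch; every type and helper below is a placeholder rather than part of the code above:

// Placeholder types standing in for whatever the real tracer uses.
struct Vec3  { double x = 0, y = 0, z = 0; };
struct Ray   { Vec3 origin, direction; };
struct Hit   { Vec3 point, normal; double reflectivity = 0; };

struct Scene {
    bool intersect(const Ray&, Hit&) const { return false; } // placeholder
};

Vec3 shadeLocal(const Scene&, const Ray&, const Hit&) { return {}; } // placeholder
Ray  reflectedRay(const Ray& r, const Hit&)           { return r; }  // placeholder

// One recursive call per bounce; a refracted child ray would be handled the
// same way with its own weight.
Vec3 trace(const Scene& scene, const Ray& ray, int depth) {
    const int MAX_DEPTH = 5;
    if (depth > MAX_DEPTH) return {};            // stop bouncing

    Hit hit;
    if (!scene.intersect(ray, hit)) return {};   // background colour

    Vec3 color = shadeLocal(scene, ray, hit);    // direct lighting at the hit
    if (hit.reflectivity > 0.0) {
        Vec3 bounce = trace(scene, reflectedRay(ray, hit), depth + 1);
        color.x += hit.reflectivity * bounce.x;
        color.y += hit.reflectivity * bounce.y;
        color.z += hit.reflectivity * bounce.z;
    }
    return color;
}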

Determining Midpoint Between 2 Coordinates

I am trying to determine the midpoint between two locations in an MKMapView. I am following the method outlined here (and here) and rewrote it in Objective-C, but the map is being centered somewhere northeast of Baffin Island, which is nowhere near the two points.
My method based on the java method linked above:
+(CLLocationCoordinate2D)findCenterPoint:(CLLocationCoordinate2D)_lo1 :(CLLocationCoordinate2D)_loc2 {
    CLLocationCoordinate2D center;

    double lon1 = _lo1.longitude * M_PI / 180;
    double lon2 = _loc2.longitude * M_PI / 100;
    double lat1 = _lo1.latitude * M_PI / 180;
    double lat2 = _loc2.latitude * M_PI / 100;

    double dLon = lon2 - lon1;
    double x = cos(lat2) * cos(dLon);
    double y = cos(lat2) * sin(dLon);
    double lat3 = atan2( sin(lat1) + sin(lat2), sqrt((cos(lat1) + x) * (cos(lat1) + x) + y * y) );
    double lon3 = lon1 + atan2(y, cos(lat1) + x);

    center.latitude = lat3 * 180 / M_PI;
    center.longitude = lon3 * 180 / M_PI;
    return center;
}
The 2 parameters have the following data:
_loc1:
latitude = 45.4959839
longitude = -73.67826455
_loc2:
latitude = 45.482889
longitude = -73.57522299
The above are correctly placed on the map (in and around Montreal). I am trying to center the map at the midpoint between the 2, yet my method returns the following:
latitude = 65.29055
longitude = -82.55425
which is somewhere in the Arctic, when it should be around 500 miles south.
In case someone needs the code in Swift, I have written a library function in Swift to calculate the midpoint between MULTIPLE coordinates:
/** Degrees to Radians **/
class func degreeToRadian(angle: CLLocationDegrees) -> CGFloat {
    return ( (CGFloat(angle)) / 180.0 * CGFloat(M_PI) )
}

/** Radians to Degrees **/
class func radianToDegree(radian: CGFloat) -> CLLocationDegrees {
    return CLLocationDegrees( radian * CGFloat(180.0 / M_PI) )
}

class func middlePointOfListMarkers(listCoords: [CLLocationCoordinate2D]) -> CLLocationCoordinate2D {
    var x = 0.0 as CGFloat
    var y = 0.0 as CGFloat
    var z = 0.0 as CGFloat

    for coordinate in listCoords {
        let lat: CGFloat = degreeToRadian(coordinate.latitude)
        let lon: CGFloat = degreeToRadian(coordinate.longitude)
        x = x + cos(lat) * cos(lon)
        y = y + cos(lat) * sin(lon)
        z = z + sin(lat)
    }

    x = x / CGFloat(listCoords.count)
    y = y / CGFloat(listCoords.count)
    z = z / CGFloat(listCoords.count)

    let resultLon: CGFloat = atan2(y, x)
    let resultHyp: CGFloat = sqrt(x * x + y * y)
    let resultLat: CGFloat = atan2(z, resultHyp)

    let newLat = radianToDegree(resultLat)
    let newLon = radianToDegree(resultLon)
    let result: CLLocationCoordinate2D = CLLocationCoordinate2D(latitude: newLat, longitude: newLon)
    return result
}
A detailed answer can be found here.
Updated For Swift 5
func geographicMidpoint(betweenCoordinates coordinates: [CLLocationCoordinate2D]) -> CLLocationCoordinate2D {
    guard coordinates.count > 1 else {
        return coordinates.first ?? // return the only coordinate
            CLLocationCoordinate2D(latitude: 0, longitude: 0) // return null island if no coordinates were given
    }

    var x = Double(0)
    var y = Double(0)
    var z = Double(0)

    for coordinate in coordinates {
        let lat = coordinate.latitude.toRadians()
        let lon = coordinate.longitude.toRadians()
        x += cos(lat) * cos(lon)
        y += cos(lat) * sin(lon)
        z += sin(lat)
    }

    x /= Double(coordinates.count)
    y /= Double(coordinates.count)
    z /= Double(coordinates.count)

    let lon = atan2(y, x)
    let hyp = sqrt(x * x + y * y)
    let lat = atan2(z, hyp)

    return CLLocationCoordinate2D(latitude: lat.toDegrees(), longitude: lon.toDegrees())
}
Just a hunch, but I noticed your lon2 and lat2 variables are being computed with M_PI/100 and not M_PI/180.
double lon1 = _lo1.longitude * M_PI / 180;
double lon2 = _loc2.longitude * M_PI / 100;
double lat1 = _lo1.latitude * M_PI / 180;
double lat2 = _loc2.latitude * M_PI / 100;
Changing those to 180 might help you out a bit.
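That is, the two lines become:

double lon2 = _loc2.longitude * M_PI / 180;
double lat2 = _loc2.latitude * M_PI / 180;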
For Swift users, the corrected variant as @dinjas suggests:
import Foundation
import MapKit

extension CLLocationCoordinate2D {

    // MARK: CLLocationCoordinate2D+MidPoint
    func middleLocationWith(location: CLLocationCoordinate2D) -> CLLocationCoordinate2D {
        let lon1 = longitude * M_PI / 180
        let lon2 = location.longitude * M_PI / 180
        let lat1 = latitude * M_PI / 180
        let lat2 = location.latitude * M_PI / 180
        let dLon = lon2 - lon1
        let x = cos(lat2) * cos(dLon)
        let y = cos(lat2) * sin(dLon)
        let lat3 = atan2( sin(lat1) + sin(lat2), sqrt((cos(lat1) + x) * (cos(lat1) + x) + y * y) )
        let lon3 = lon1 + atan2(y, cos(lat1) + x)
        let center: CLLocationCoordinate2D = CLLocationCoordinate2DMake(lat3 * 180 / M_PI, lon3 * 180 / M_PI)
        return center
    }
}
It's important to say that the formula the OP used to calculate the geographic midpoint is based on this formula, which explains the cos/sin/sqrt calculation.
This formula will give you the geographic midpoint for any distance, including across the four quadrants and the prime meridian.
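For reference, that midpoint formula written out to match the code above, with $\varphi$ = latitude, $\lambda$ = longitude in radians and $\Delta\lambda = \lambda_2 - \lambda_1$:

$$B_x = \cos\varphi_2 \cos\Delta\lambda, \qquad B_y = \cos\varphi_2 \sin\Delta\lambda$$
$$\varphi_m = \operatorname{atan2}\!\left(\sin\varphi_1 + \sin\varphi_2,\ \sqrt{(\cos\varphi_1 + B_x)^2 + B_y^2}\right), \qquad \lambda_m = \lambda_1 + \operatorname{atan2}(B_y,\ \cos\varphi_1 + B_x)$$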
But if your calculation is for a short range, around 1 kilometre, using a simple average will produce the same midpoint result, i.e.:
let firstPoint = CLLocation(....)
let secondPoint = CLLocation(....)
let midPointLat = (firstPoint.coordinate.latitude + secondPoint.coordinate.latitude) / 2
let midPointLong = (firstPoint.coordinate.longitude + secondPoint.coordinate.longitude) / 2
You can actually use it for 10 km but expect a deviation; if you only need an estimate of a short-range midpoint with a fast solution, it will be sufficient.
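For the two coordinates in the question (which are roughly 8 km apart), the simple average gives:

midPointLat  = (45.4959839 + 45.482889) / 2 ≈ 45.48944
midPointLong = (-73.67826455 + -73.57522299) / 2 ≈ -73.62674

which lands between the two points in Montreal, as expected.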
I think you are overthinking it a bit. Just do:
float lon3 = ((lon1 + lon2) / 2)
float lat3 = ((lat1 + lat2) / 2)
lat3 and lon3 will be the center point.