React Native Razorpay UI opening with transparent background

I am using react-native-customui. I am opening Razorpay with the options below:
const options = {
  description: 'Credits towards consultation',
  currency: 'INR',
  key_id: 'rzr_id',
  amount: 100,
  email: 'void@razorpay.com',
  contact: '9999999123',
  method: 'card',
  'card[name]': 'test',
  'card[number]': 4111111111111111,
  'card[expiry_month]': 12,
  'card[expiry_year]': 23,
  'card[cvv]': 123,
};

Razorpay.open(options)
  .then((res: any) => {
    // handle success
  })
  .catch((error: any) => {
    // handle failure
  });
The Razorpay UI is opening with a transparent background, while on Android it opens with a normal white background.
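For reference, a minimal sketch of the same call written with async/await and explicit error handling; the import path and the shape of res/error are assumptions based on how react-native-customui is typically used, not confirmed by the question:

import Razorpay from 'react-native-customui'; // assumed import path

const payWithCard = async () => {
  try {
    // Same options object as above
    const res: any = await Razorpay.open(options);
    // handle success (inspect res for the payment id)
    console.log('payment success:', res);
  } catch (error: any) {
    // handle failure (error typically carries a code and description)
    console.log('payment failure:', error);
  }
};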


React Native iOS cannot access microphone when lock screen + app killed

I am building a calling app, which includes:
1. PushKit VoIP to show the incoming call
2. react-native-callkeep to handle answer/end call
3. react-native-webrtc to make the call
The problem is:
1. With the app state active/background, the call works normally.
2. Only in the case where the phone is locked and the app is not running, I cannot see the microphone indicator at the top of the screen, so I think I cannot access the microphone (tested with an audio call).
About my code: I call mediaDevices.getUserMedia (via getMediaStream) in useEffect() like this:
useEffect(() => {
  ....
  getMediaStream()
  return () => {
    mounted.current = false
    ....
  }
}, [])

const getMediaStream = async () => {
  if (!localMediaStream) {
    let isFront = true
    let stream = await mediaDevices.getUserMedia({
      audio: {
        echoCancellation: true,
        noiseSuppression: true,
        autoGainControl: true,
        googEchoCancellation: true,
        googAutoGainControl: true,
        googNoiseSuppression: true,
        googHighpassFilter: true,
        googTypingNoiseDetection: true,
        googNoiseReduction: true
      },
      video: isVideo ? {
        width: { min: 480, max: 1280 },
        height: { min: 320, max: 720 },
        // vb: true,
        frameRate: 25,
        facingMode: (isFront ? 'user' : 'environment'),
      } : false
    })
    setlocalMediaStream({
      publisher: {
        id: currentUserName,
        userId: masterInfo.user.id,
        displayName: masterInfo.user.name
      },
      stream: stream,
    })
    await initJanus(stream)
  }
}
I don't know why the green dot does not appear in that case (app locked and not running). Can someone help?
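Not an answer from the thread, but a sketch of one commonly suggested pattern, assuming react-native-callkeep's didActivateAudioSession event: start capturing only after CallKit has activated the call's audio session, so the microphone is requested inside an active call context rather than from a locked/killed app.

import RNCallKeep from 'react-native-callkeep'

// Sketch only: defer getUserMedia until iOS reports the audio session is active.
// getMediaStream() is the function from the snippet above.
RNCallKeep.addEventListener('didActivateAudioSession', async () => {
  // At this point the CallKit audio session is active, so microphone capture
  // started here should work even when the call was answered from the lock screen.
  await getMediaStream()
})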

Freeze redirect in Cypress

I would like to test a GA4 product click event. For this I write the required data into the data layer and want to check whether the correct data ends up in it. However, when I click on the product in Cypress, the redirect is faster than the test can read the data layer. Is there any way I can pause or freeze the redirect?
Here is the expected data in the data layer:
select_item: {
  event: 'select_item',
  ecommerce: {
    item_name: 'Artiklename',
    item_id: '000000',
    price: 1.19,
    currency: 'EUR',
    item_brand: 'Brand',
    item_category: 'category',
    item_category2: 'category2',
    item_category3: 'category3',
    item_category4: 'category4',
    index: 1,
    quantity: 1,
    item_list_name: "List Name"
  },
},
Here is the actual test:
context('Google Analytics 4: should track select_item event', function () {
  it('should track select item on search page', function () {
    cy.getTrackingData('expressShippingArticle', 'select_item').then(
      (expectedSelectItemEvent) => {
        // act
        cy.visitWithBasicAuth(
          routes.category.resultList(
            '000/SomeArticle'
          )
        )
        // assert
        cy.getSpecificEventFromDataLayer('select_item').then(
          (event) => {
            cy.wrap(event).should('not.exist')
          }
        )
        // act
        cy.get(selectors.resultList.productInResultList)
          .first()
          .click()
        cy.getSpecificEventFromDataLayer('select_item').then(
          (actualSelectItemEvent) => {
            cy.wrap(actualSelectItemEvent, { timeout: 0 }).should(
              spok(expectedSelectItemEvent)
            )
          }
        )
      }
    )
  })
})
Not sure if this works with GA4, but if you want to delay a network request/response, add an intercept with the delay option:
cy.intercept('redirect-url', (req) => {
  req.reply((res) => {
    res.setDelay(2000)
  })
})

// act
cy.get(selectors.resultList.productInResultList).first().click()
...
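For context, getSpecificEventFromDataLayer is a custom command assumed by the question, not part of Cypress. A hypothetical implementation that reads the GA4 data layer through cy.window() might look like this:

// cypress/support/commands.js — hypothetical, not from the question
Cypress.Commands.add('getSpecificEventFromDataLayer', (eventName) => {
  return cy.window().then((win) => {
    const dataLayer = win.dataLayer || []
    // Return the last pushed entry whose event name matches, or undefined
    return dataLayer.filter((entry) => entry.event === eventName).pop()
  })
})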

The route line is not showing on Android when using a FeatureCollection

I need a bit of help with a query. I am trying to render multiple polylines on a single map. It works perfectly fine on iOS but does not work on Android. Here is my code snippet:
import MapboxGL from '@react-native-mapbox-gl/maps';

const MapbBoxDirection = ({shape}: any) => {
  const sp = returnOption(shape);
  const Poliline = React.useMemo(() => {
    return (
      <MapboxGL.Animated.ShapeSource
        id="routeSource"
        buffer={512}
        tolerance={1}
        lineMetrics={false}
        clusterRadius={10}
        shape={sp}>
        <MapboxGL.Animated.LineLayer
          id="routeFill"
          style={{
            lineColor: '#ff8109',
            lineWidth: 10,
            lineRoundLimit: 12,
            lineCap: 'round',
            lineOpacity: 1.84,
          }}
        />
      </MapboxGL.Animated.ShapeSource>
    );
  }, [shape, sp]);
  return Poliline;
};
import {featureCollection, lineString as makeLineString} from '@turf/helpers';

// Build the GeoJSON
export const returnOption = (res): any => {
  const feature = res.map((item: any, index: any) => {
    if (item[`Route${index}`]?.length > 2) {
      return makeLineString(item[`Route${index}`]);
    }
  });
  const featureCollectiondata = featureCollection(feature);
  return featureCollectiondata;
};
It works fine on iOS but not on Android. I also tried building the JSON manually without the turf helper and faced the same problem. Could you please help me resolve this for Android? One more thing: a SINGLE route works fine on both platforms, so the problem only appears when I use a FeatureCollection JSON. Thank you very much.
After a lot of effort I found the solution. Sometimes undefined or null entries are generated by default, and that is why the route line does not render on Android; iOS handles them by default. So:
export const returnOption = async (res: any, setShape: any) => {
  const feature = await Promise.all(
    res.map(async (item: any, index: any): Promise<any> => {
      if (item[`Route${index}`]?.length > 1) {
        // return makeLineString(item[`Route${index}`]);
        return {
          type: 'Feature',
          properties: {
            prop0: 'value0',
            prop1: 0.0,
          },
          geometry: {
            type: 'LineString',
            coordinates: item[`Route${index}`],
          },
        };
      }
    }),
  );
  const RemoveUndefined = feature?.filter(item => item !== undefined);
  setShape({
    type: 'FeatureCollection',
    features: RemoveUndefined,
  });
};
Finally, I have achieved a working solution.
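As a side note, the same idea can be written synchronously with the turf helpers from the question by filtering out the undefined entries before building the collection; this is a sketch, not the author's code:

import {featureCollection, lineString as makeLineString} from '@turf/helpers';

// Sketch only: drop entries that would become undefined (which the Android
// renderer rejects) before creating the FeatureCollection.
export const returnOptionFiltered = (res: any[]): any => {
  const features = res
    .map((item: any, index: number) =>
      item[`Route${index}`]?.length > 1 ? makeLineString(item[`Route${index}`]) : undefined,
    )
    .filter((f: any) => f !== undefined);
  return featureCollection(features);
};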

ReferenceError: google is not defined | Google Charts in pdfkit

I was trying to render a graph in a PDF generated using pdfkit. I found this solution: https://quickchart.io/documentation/google-charts-image-server/#example
const GoogleChartsNode = require('google-charts-node');

// Define your chart drawing function
function drawChart() {
  const data = google.visualization.arrayToDataTable([
    ['City', '2010 Population'],
    ['New York City, NY', 8175000],
    ['Los Angeles, CA', 3792000],
    ['Chicago, IL', 2695000],
    ['Houston, TX', 2099000],
    ['Philadelphia, PA', 1526000]
  ]);
  const options = {
    title: 'Population of Largest U.S. Cities',
    chartArea: {width: '50%'},
    hAxis: {
      title: 'Total Population',
      minValue: 0
    },
    vAxis: {
      title: 'City'
    }
  };
  const chart = new google.visualization.BarChart(container);
  chart.draw(data, options);
}

// Render the chart to image
const image = await GoogleChartsNode.render(drawChart);
This works fine and I got a generated graph in my PDF. So I tried to pass dynamic data into a new function I wrote:
function drawtIntelligenceBarchar(intelligence) {
  console.log(intelligence)
  try {
    const data = google.visualization.arrayToDataTable([
      ['Intelligence', 'Intelligence Level'],
      ["Linguistic", intelligence["Linguistic"]],
      ["Logical", intelligence["Logical"]],
      ["Musical", intelligence["Musical"]],
      ["Visual-Spatial", intelligence["Visual-Spatial"]],
      ["Kinesthetic", intelligence["Kinesthetic"]],
      ["Intra-Personal", intelligence["Intra-Personal"]],
      ["Inter-Personal", intelligence["Inter-Personal"]],
      ["Naturalistic", intelligence["Naturalistic"]],
      ["Existential", intelligence["Existential"]]
    ]);
    const options = {
      title: 'Intelligence Graph',
      chartArea: {width: '70%'},
      hAxis: {
        title: 'AVERAGE GOOD VERY GOOD EXCELLENT',
        minValue: 0,
        maxValue: 100,
      },
      vAxis: {
        title: 'INTELLIGENCE TYPE'
      }
    };
    const chart = new google.visualization.BarChart(container);
    chart.draw(data, options);
  } catch (error) {
    console.log(error);
  }
}

const intelligenceGraph = await GoogleChartsNode.render(drawtIntelligenceBarchar(intelligence));
Then I got this error:
ReferenceError: google is not defined
It would be awesome if you could help me with this.
I faced a similar issue a while ago. The thing to consider here is that Google Charts is a library that is loaded when the page is rendered, meaning that in order to generate a PDF it has to be loaded before you generate it.
The approach you can take is to use a headless browser to emulate that the page is open, so the dependencies are loaded; then, when you send the HTML to pdfkit, it will contain everything you need to generate the PDF. Alternatively, you can use Selenium to do something similar. The tricky part, however, is adjusting the window size to hold all the charts, but you can sort that out with some trial and error.
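As an illustration of that suggestion, here is a minimal sketch using Puppeteer to render a page containing the chart, take a screenshot, and embed the image into a pdfkit document; the function name, page content, and sizes are assumptions, not part of the original answer:

const puppeteer = require('puppeteer');
const PDFDocument = require('pdfkit');
const fs = require('fs');

// Sketch only: render chart HTML in headless Chrome and place the
// resulting screenshot into a PDF with pdfkit.
async function chartHtmlToPdf(html, outPath) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // networkidle0 waits until the Google Charts loader has finished fetching
  await page.setContent(html, { waitUntil: 'networkidle0' });
  const png = Buffer.from(await page.screenshot({ fullPage: true }));
  await browser.close();

  const doc = new PDFDocument();
  doc.pipe(fs.createWriteStream(outPath));
  doc.image(png, 0, 0, { width: 500 });
  doc.end();
}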
This is due to the accessibility of the variable intelligence inside the child function from the external function. There is a solution from quickchart.io: instead of using a function, put the chart code in a JS string stored in a const variable and pass that.
const drawtIntelligenceBarchar = `
  const data = new google.visualization.DataTable();
  data.addColumn('string', 'Intelligence');
  data.addColumn('number', 'Intelligence Level (%)');
  data.addRows([
    ["Linguistic", ${intelligence["Linguistic"]}],
    ["Logical", ${intelligence["Logical"]}],
    ["Musical", ${intelligence["Musical"]}],
    ["Visual-Spatial", ${intelligence["Visual-Spatial"]}],
    ["Kinesthetic", ${intelligence["Kinesthetic"]}],
    ["Intra-Personal", ${intelligence["Intra-Personal"]}],
    ["Inter-Personal", ${intelligence["Inter-Personal"]}],
    ["Naturalistic", ${intelligence["Naturalistic"]}],
    ["Existential", ${intelligence["Existential"]}]
  ]);
  const options = {
    title: 'Intelligence Graph',
    chartArea: {width: '50%'},
    hAxis: {
      title: 'AVERAGE GOOD VERY GOOD EXCELLENT',
      minValue: 0,
      maxValue: 100,
    },
    vAxis: {
      title: 'INTELLIGENCE TYPE'
    }
  };
  const chart = new google.visualization.BarChart(container);
  chart.draw(data, options);`;

const intelligenceGraph = await GoogleChartsNode.render(drawtIntelligenceBarchar, {
  width: 500,
  height: 300,
});
doc.image(intelligenceGraph, 0, 0)
For more information, visit the source code documentation.

How can I use my phone's back camera in the browser with QuaggaJS

I want to use QuaggaJS in my web app to scan barcodes.
The problem is that I want to use the mobile's back camera. The documentation says that Quagga uses a parameter called "facingMode".
If you set facingMode = "environment", it uses the webcam if you are on a PC, or the back camera if you are on a phone.
And if you set facingMode = "user", it uses the mobile's front camera.
Well, I tried both values to test and there is no difference: on the cell phone it keeps using the front camera. I tried to use the back camera but got nothing.
I leave my code here:
function startScanner() {
  Quagga.init({
    inputStream: {
      name: "Live",
      type: "LiveStream",
      target: document.querySelector('#scanner-container'),
      constraints: {
        width: 600,
        height: 450,
        facingMode: "enviroment" // or "user" for the front camera
      },
    },
    decoder: {
      readers: [
        "code_128_reader",
        "ean_reader",
        "ean_8_reader",
        "code_39_reader",
        "code_39_vin_reader",
        "codabar_reader",
        "upc_reader",
        "upc_e_reader",
        "i2of5_reader"
      ],
      debug: {
        showCanvas: true,
        showPatches: true,
        showFoundPatches: true,
        showSkeleton: true,
        showLabels: true,
        showPatchLabels: true,
        showRemainingPatchLabels: true,
        boxFromPatches: {
          showTransformed: true,
          showTransformedBox: true,
          showBB: true
        }
      }
    },
  }, function (err) {
    if (err) {
      console.log(err);
      return
    }
    console.log("Initialization finished. Ready to start");
    console.log(navigator.mediaDevices.enumerateDevices());
    Quagga.start();
    // Set flag to indicate the scanner is running
    _scannerIsRunning = true;
  });
}
I tried this on an Android phone.
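Not from the original thread, but two things worth checking. First, the value in the snippet is spelled "enviroment"; getUserMedia only recognises "environment", and an unrecognised value usually falls back to the default (front) camera. Second, camera access in mobile browsers requires a secure (HTTPS) context. A minimal sketch of the constraints block with the corrected spelling:

constraints: {
  width: 600,
  height: 450,
  // Note the spelling: "environment" selects the rear camera,
  // "user" selects the front camera.
  facingMode: "environment"
},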