I'm creating an ecommerce app that uses a geolocation library (https://github.com/transistorsoft/react-native-background-geolocation).
I have an orderState:
const ordersInitState = {
  lineItems: [],
  status: ORDER_STATUSES.AWAITING_CHECKOUT,
};

const ordersReducer = (prevState = ordersInitState, action) => {
  switch (action.type) {
    ...
    case actions.ORDERS.REMOVE_ITEM:
      const lineItems = [...prevState.lineItems];
      const indexToRemove = action.payload;
      lineItems.splice(indexToRemove, 1);
      const status = lineItems.length > 0 ? prevState.status : ORDER_STATUSES.AWAITING_CHECKOUT;
      return {
        ...prevState,
        status,
        lineItems,
      };
    default:
      return prevState;
  }
};

export default ordersReducer;
As you can see, the client is allowed to remove items from their cart. If they end up removing everything, their order status will reset. If they do end up emptying their cart (lineItems.length === 0) I want to also run a simple line from the geolocation library:
BackgroundGeolocation.removeGeofence("blah");
Where would I put this? It feels wrong to do it in the reducer because it has nothing to do with state. It also isn't specific to one particular component, so putting it in one of my components doesn't make sense.
I'm still a bit new to redux so I'm not sure where to put non-state related methods.
What you are looking for is commonly called "side effects" middleware. In the abstract, you want to cause an effect in an external system (in this case, the geolocation library) when the application state changes.
There are many libraries for this use case. Some of the more popular ones are redux-saga and redux-loop. They are both good tools and help give structure to managing complicated side effects, but both come with a significant conceptual overhead, and should only be used when really needed.
If you want a quick and simple solution, you can create a plain JavaScript module that subscribes to your store changes and executes the side effects for you:
import BackgroundGeolocation from 'react-native-background-geolocation';
import store from '../your/redux/store';

let previousCount = 0;

store.subscribe(() => {
  const count = store.getState().orders.lineItems.length;
  if (count === 0 && previousCount > 0) {
    // someone just emptied the cart, so execute the side effect
    BackgroundGeolocation.removeGeofence("blah");
  }
  previousCount = count;
});
And then if you find yourself needing this type of solution repeatedly, you can reach for one of the libraries mentioned above.
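For comparison, here is a minimal redux-saga sketch of the same effect, assuming redux-saga is wired into your store; the import path for your action constants is hypothetical:

import { select, takeEvery } from 'redux-saga/effects';
import BackgroundGeolocation from 'react-native-background-geolocation';
import { actions } from '../your/redux/actions'; // hypothetical path

function* onRemoveItem() {
  const lineItems = yield select((state) => state.orders.lineItems);
  if (lineItems.length === 0) {
    // cart was just emptied, run the side effect
    BackgroundGeolocation.removeGeofence("blah");
  }
}

export default function* ordersSaga() {
  yield takeEvery(actions.ORDERS.REMOVE_ITEM, onRemoveItem);
}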
OK, say I have an initial state in our Redux store that looks like this:
const initialState = {
  userReports: [],
  activeReport: null,
}
userReports is a list of reports. activeReport is one of those reports (the one that is actively being worked with).
I want the active report to point to one in the array. In other words, if I modify the active report, it would modify the one in the userReports array. This means the two objects must point to the same memory space. That's easy to set up.
The alternative to this approach would be to copy one of the reports from the userReports array and set it as the active report (now it has a different memory address). The problem is that now, when I edit the activeReport, I also have to search through the array of userReports, find the report that resembles the active report, and modify it there too. This feels verbose.
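To make the two approaches concrete, here is a quick sketch with a hypothetical report shape:

const report = { id: 1, title: 'Q1 report' }

// 1) Shared reference: editing activeReport also changes the array entry
const stateShared = { userReports: [report], activeReport: report }
stateShared.activeReport.title = 'edited'
console.log(stateShared.userReports[0].title) // 'edited'

// 2) Copy: activeReport is independent, so edits must be synced back manually
const stateCopied = { userReports: [report], activeReport: { ...report } }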
Here is the question:
Would it be bad practice to have the activeReport point to a report in the array (same object)? When I want to change the report, I could do something like this (example uses redux-thunk):
export const updateReport = (report) => async (dispatch, getState) => {
  try {
    const report = getState().reports.activeReport
    // modify the active report here
    report.title = "blah blah blah"
    dispatch({ type: ACTIONS.UPDATE_REPORT, payload: report })
  } catch (error) {
    console.log(`ERROR: ${error.message}`)
  }
}
And in my reducer:
case ACTIONS.UPDATE_REPORT:
  return { ...state, activeReport: action.payload }
As you can see, after updating the report I still return a "new version" of that report and set it as active, but this approach also updates the report in the userReports array, because they point to the same memory address.
I would say that's not ideal. Do the reports have IDs? If they do, I would rather hold the userReports in an object with the keys being the IDs; then the active report can just be an ID (renamed to activeReportId), so you can fetch the active report with userReports[activeReportId].
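A quick sketch of that normalized shape (the field names are assumptions):

const initialState = {
  userReports: {},   // keyed by report id, e.g. { r1: { id: 'r1', title: '...' } }
  activeReportId: null,
}

// Reading the active report is a single lookup:
const selectActiveReport = (state) => state.userReports[state.activeReportId]

// Updating it replaces exactly one keyed entry, immutably:
const updateActiveReport = (state, updatedReport) => ({
  ...state,
  userReports: { ...state.userReports, [state.activeReportId]: updatedReport },
})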
You also asked for reasons:
So firstly, any screen that looks at userReports won't re-render, because the reports aren't being reassigned.
Secondly, if someone later wants to update those screens, they will reassign userReports, which could cause problems.
Thirdly, it's an unusual pattern, which is a huge no-no for Redux. The point of Redux is that it has a very obvious pattern, so when you add things to it you don't have to think and can just make changes with confidence.
Your activeReport should not point to an object in the userReports array; rather, it should be the id of the report the user is currently working on. Each report in userReports will have a unique id field to identify it, which is also helpful when rendering in React, since this id field can be used as the key.
Then your action creator/dispatcher will look like this:
export const updateReport = (updatedReport) => async (dispatch, getState) => {
  dispatch({ type: ACTIONS.UPDATE_REPORT, payload: updatedReport });
}
You will call this on change in your component:
const onTitleChangeHandler = (e) => {
  var newTitle = e.target.value;
  // you will get userReports and activeReport from props or a redux selector;
  // you will also need to get dispatch and getState from redux
  var activeReportObj = userReports.filter((r) => r.id === activeReport)[0];
  // spread first, then set the title, so the new value isn't overwritten
  updateReport({ ...activeReportObj, title: newTitle })(dispatch, getState);
}
Lastly, your reducer will be:
case ACTIONS.UPDATE_REPORT:
  var newUserReports = state.userReports.map((r) => {
    if (r.id === state.activeReport) {
      return action.payload;
    }
    return r;
  });
  // spread state first so the updated array isn't clobbered by the old one
  return { ...state, userReports: newUserReports };
I am trying to use the OpenSea API, and I noticed that I need to set a limit before retrieving assets:
https://docs.opensea.io/reference/getting-assets
I figured I can use the offset to navigate through all the items, even though that's tedious. But the problem is that the offset itself has a limit, so are assets beyond the max offset inaccessible?
I read that the API is "rate-limited" without an API key, so I assume that relates to the number of requests you can make in a certain time period; am I correct about that? Or does a key lift the limit on returned assets? The documentation isn't clear about that: https://docs.opensea.io/reference/api-overview
What can I do to navigate through all the assets?
May be late answering this one, but I had a similar problem. You can only access a limited number (50) of assets per request when using the API.
Using the API referenced on the page you linked to, you could do a for loop to grab assets of a collection in a range. For example, using Python:
import requests

def get_asset(collection_address: str, asset_id: str) -> str:
    url = "https://api.opensea.io/api/v1/assets?token_ids=" + asset_id + "&asset_contract_address=" + collection_address + "&order_direction=desc&offset=0&limit=20"
    response = requests.request("GET", url)
    asset_details = response.text
    return asset_details
#using the Dogepound collection with address 0x73883743dd9894bd2d43e975465b50df8d3af3b2
collection_address = '0x73883743dd9894bd2d43e975465b50df8d3af3b2'
asset_ids = [i for i in range(10)]
assets = [get_asset(collection_address, str(i)) for i in asset_ids]
print(assets)
For me, I actually used TypeScript because that's what OpenSea uses for their SDK (https://github.com/ProjectOpenSea/opensea-js). It's a bit more versatile and allows you to automate making offers, purchases and sales on assets. Anyway, here's how you can get all of those assets in TypeScript (you may need a few more dependencies than those referenced below):
import * as Web3 from 'web3'
import { OpenSeaPort, Network } from 'opensea-js'

// This example provider won't let you make transactions, only read-only calls:
const provider = new Web3.providers.HttpProvider('https://mainnet.infura.io')

const seaport = new OpenSeaPort(provider, {
  networkName: Network.Main
})

async function getAssets(seaport: OpenSeaPort, collectionAddress: string, tokenIDRange: number) {
  let assets: Array<any> = []
  for (let i = 0; i < tokenIDRange; i++) {
    try {
      const result = await seaport.api.getAsset({ tokenAddress: collectionAddress, tokenId: i })
      assets = [...assets, result]
    } catch (err) {
      console.log(err)
    }
  }
  return assets
}

(async () => {
  // Dogepound collection, same address as the Python example above
  const collectionAddress = '0x73883743dd9894bd2d43e975465b50df8d3af3b2'
  const assets = await getAssets(seaport, collectionAddress, 10)
  // Do something with assets
})()
The final thing to be aware of is that their API is rate-limited, like you said, so you can only make a certain number of calls within a time frame before you get a pesky 429 error. So either find a way of bypassing rate limits or put a timer on your requests, as sketched below.
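For example, here is a minimal sketch of spacing out the calls; the 250 ms delay is an assumed value, not an official limit:

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function getAssetsThrottled(seaport, collectionAddress, tokenIDRange) {
  const assets = []
  for (let i = 0; i < tokenIDRange; i++) {
    assets.push(await seaport.api.getAsset({ tokenAddress: collectionAddress, tokenId: i }))
    await sleep(250) // pause between calls to stay under the rate limit
  }
  return assets
}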
I am working with Nuxt and Vue, with a MySQL database, all of which are new to me. I am transitioning out of WebMatrix, where I had a single Admin page for multiple tables, with dropdowns for selecting a particular option. On this page, I could elect to add, edit or delete the selected option, say a composer or music piece. Here is some code for just two of the tables (it fails with a "Module build failed" runtime error):
<script>
export default {
  async asyncData(context) {
    let [{ arrangers }, { composers }] = await Promise.all([
      context.$axios.get(`/api/arrangers`),
      context.$axios.get(`/api/composers`),
    ])
    const { arrangers } = await context.$axios.get('/api/arrangers')
    const { composers } = await context.$axios.get('/api/composers')
    return { arrangers, composers }
  },
}
</script>
You have the same variable name for both the input (the left part of the Promise.all destructuring) and the result of your axios calls. To avoid the naming collision, you can rename the results and return this:
const { arrangers: fetchedArrangers } = await context.$axios.get('/api/arrangers')
const { composers: fetchedComposers } = await context.$axios.get('/api/composers')
return { fetchedArrangers, fetchedComposers }
EDIT: this is how I'd write it:
async asyncData({ $axios }) {
  const [posts, comments] = await Promise.all([
    $axios.$get('https://jsonplaceholder.typicode.com/posts'),
    $axios.$get('https://jsonplaceholder.typicode.com/comments'),
  ])
  console.log('posts', posts)
  console.log('comments', comments)
  return { posts, comments }
},
When you destructure the result of a Promise.all, you need to destructure according to the shape of the response you get from the API. Usually what you have is data, so { arrangers } or { composers } will usually not work. Of course, it depends on your own API and whether it returns this type of data.
Since destructuring two named properties out of the result is not doable here, it's better to simply use array destructuring. This way, each entry is the response object with a data array inside of it.
To have direct access to the data, you can use the $get shortcut, which comes in handy in our case. Directly destructuring $axios is a nice-to-have too, as it removes the dispensable context object.
In my example, I've used JSONplaceholder to have a classic API behavior (especially the data part) but it can work like this with any API.
Also, this is what happens if you simply use this.$axios.get: you get the famous data wrapper that you need to drill into later on (.data) to reach the useful part of the API's response. That's why I do love the $get shortcut: it goes to the point faster.
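A quick illustration of the difference, using the same placeholder API (inside any component method):

// Plain get(): the payload sits on response.data
const response = await this.$axios.get('https://jsonplaceholder.typicode.com/posts')
const posts = response.data

// $get() shortcut: the payload is returned directly
const samePosts = await this.$axios.$get('https://jsonplaceholder.typicode.com/posts')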
PS: all of this is possible because Promise.all preserves the order of the calls: https://stackoverflow.com/a/28066851/8816585
EDIT 2: an example of how to make it more flexible could be:
async asyncData({ $axios }) {
  const urlEndpointsToFetchFrom = ['comments', 'photos', 'albums', 'todos', 'posts']
  const allResponses = await Promise.all(
    urlEndpointsToFetchFrom.map((url) => $axios.$get(`https://jsonplaceholder.typicode.com/${url}`)),
  )
  const [comments, photos, albums, todos, posts] = allResponses
  return { comments, photos, albums, todos, posts }
},
Of course, preserving the order in the array destructuring is important. It's maybe doable in a dynamic way too; one possible sketch follows.
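For instance, a minimal sketch using Object.fromEntries to zip the endpoint names with their responses, still relying on Promise.all preserving order:

async asyncData({ $axios }) {
  const endpoints = ['comments', 'photos', 'albums', 'todos', 'posts']
  const responses = await Promise.all(
    endpoints.map((url) => $axios.$get(`https://jsonplaceholder.typicode.com/${url}`)),
  )
  // Build { comments: [...], photos: [...], ... } without a positional destructure
  return Object.fromEntries(endpoints.map((name, i) => [name, responses[i]]))
},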
Also, I cannot recommend enough also trying the fetch() hook alternative someday. I found it more flexible, and it has a nice $fetchState.pending helper; more here: https://nuxtjs.org/blog/understanding-how-fetch-works-in-nuxt-2-12/ and in the article at the bottom of that page.
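For reference, a minimal sketch of that fetch() hook (Nuxt >= 2.12, same placeholder API):

export default {
  data() {
    return { posts: [] }
  },
  async fetch() {
    // unlike asyncData, fetch() can assign to component state directly
    this.posts = await this.$axios.$get('https://jsonplaceholder.typicode.com/posts')
  },
}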
Problem: ignore some part of the .snap file test results
The question here: there are some components in my test that have random values, and I don't really care about testing them. Is there any way to ignore part of my X.snap file, so that when I run tests in the future it won't give me failing results?
Now you can also use property matchers for these cases.
For example, to be able to use a snapshot with this object:
const obj = {
  id: dynamic(), // some non-deterministic value
  foo: 'bar',
  other: 'value',
  val: 1,
};
You can use:
expect(obj).toMatchSnapshot({
  id: expect.any(String),
});
Jest will just check that id is a String and will process the other fields in the snapshot as usual.
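Property matchers can also be nested for deeper dynamic fields; the createdAt and user fields below are hypothetical additions to the object above:

expect(obj).toMatchSnapshot({
  id: expect.any(String),
  createdAt: expect.any(Date),
  user: {
    id: expect.any(Number),
  },
});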
Actually, you need to mock the moving parts.
As stated in the Jest docs:
Your tests should be deterministic. That is, running the same tests multiple times on a component that has not changed should produce the same results every time. You're responsible for making sure your generated snapshots do not include platform specific or other non-deterministic data.
If it's something related to time, you could use
Date.now = jest.fn(() => 1482363367071);
I know it's quite an old question, but I know one more solution. You can modify the property you want to ignore so it will always be constant instead of random/dynamic. This is best for cases when you are using third-party code and thus may not be able to control the non-deterministic property generation.
Example:
import React from 'react';
import Enzyme, { shallow } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';
import toJSON from 'enzyme-to-json';
import Card from './Card';

Enzyme.configure({ adapter: new Adapter() });

describe('<Card />', () => {
  it('renders <Card /> component', () => {
    const card = shallow(
      <Card
        baseChance={1}
        name={`test name`}
        description={`long description`}
        imageURL={'https://d2ph5fj80uercy.cloudfront.net/03/cat1425.jpg'}
        id={0}
        canBeIgnored={false}
        isPassive={false}
      />
    );
    const snapshot = toJSON(card);
    // for some reason snapshot.node.props.style.backgroundColor = "#cfc5f6"
    // does not work; it seems the prop is being set later
    Object.defineProperty(snapshot.node.props.style, 'backgroundColor', { value: "#cfc5f6", writable: false });
    // the second expect statement is enough, but this is the prop we care about:
    expect(snapshot.node.props.style.backgroundColor).toBe("#cfc5f6");
    expect(snapshot).toMatchSnapshot();
  });
});
You can ignore some parts in snapshot tests by replacing the properties in the HTML. Using Jest with Testing Library, it would look something like this:
import { screen, prettyDOM } from '@testing-library/dom';

it('should match snapshot', async () => {
  expect(removeUnstableHtmlProperties(await screen.findByTestId('main-container'))).toMatchSnapshot();
});

function removeUnstableHtmlProperties(htmlElement: HTMLElement) {
  const domHTML = prettyDOM(htmlElement, Infinity);
  if (!domHTML) return undefined;
  // strip id="..." attributes, which are unstable across runs
  return domHTML.replace(/id(.*)"(.*)"/g, '');
}
I used this to override moment's fromNow to make my snapshots deterministic:
import moment, { Moment } from "moment";

moment.fn.fromNow = jest.fn(function (this: Moment) {
  const withoutSuffix = false;
  return this.from(moment("2023-01-12T20:14:00"), withoutSuffix);
});
I'm learning Titanium to make iPhone/Android apps, using the Alloy MVC framework. I never used JavaScript before, apart from simple scripts in HTML to access the DOM or something like that, so I never needed to structure code before.
Now, with Titanium, I must use a lot of JS code and I was looking for ways to structure my code. Basically I found 3 ways to do it: prototype, namespace and functions inside functions.
Simple example for each:
Prototype:
NavigationController = function() {
  this.windowStack = [];
};

NavigationController.prototype.open = function(windowToOpen) {
  // add the window to the stack of windows managed by the controller
  this.windowStack.push(windowToOpen);

  // grab a copy of the current nav controller for use in the callback
  var that = this;
  windowToOpen.addEventListener('close', function() {
    if (that.windowStack.length > 1) {
      that.windowStack.pop();
    }
  });

  if (Ti.Platform.osname === 'android') {
    windowToOpen.open();
  } else {
    this.navGroup.open(windowToOpen);
  }
};

NavigationController.prototype.back = function(w) {
  if (Ti.Platform.osname === 'android') {
    w.close();
  } else {
    this.navGroup.close(w);
  }
};

module.exports = NavigationController;
Using it as:
var NavigationController = require('navigator');
var navController = new NavigationController();
Namespace (or I think it's something like that, because of the use of me = {}):
exports.createNavigatorGroup = function() {
  var me = {};
  if (OS_IOS) {
    var navGroup = Titanium.UI.iPhone.createNavigationGroup();
    var winNav = Titanium.UI.createWindow();
    winNav.add(navGroup);

    me.open = function(win) {
      if (!navGroup.window) {
        // First time call, add the window to the navigator and open the navigator window
        navGroup.window = win;
        winNav.open();
      } else {
        // All other calls, open the window through the navigator
        navGroup.open(win);
      }
    };

    me.setRightButton = function(win, button) {
      win.setRightNavButton(button);
    };

    me.close = function(win) {
      if (navGroup.window) {
        // Close the window on this nav
        navGroup.close(win);
      }
    };
  }
  return me;
};
Using it as:
var ui = require('navigation');
var nav = ui.createNavigatorGroup();
Functions inside functions:
function foobar() {
  this.foo = function() {
    console.log('Hello foo');
  };
  this.bar = function() {
    console.log('Hello bar');
  };
}
// expose foobar to other modules
exports.foobar = foobar;
Using it as:
var foobar = require('foobar').foobar
var test = new foobar();
test.bar(); // 'Hello bar'
And now my question is: which is better for keeping code clean and maintainable? It seems that prototype is clear and easy to read/maintain. Namespace confuses me a bit, but it only needs the initial function to be executed to become "available" (no use of new when declaring it; I suppose because it returns the object/namespace me?). Finally, functions inside functions is similar to the last one, so I don't know exactly the difference, but it is useful to export only the main function and have all the inner functions available for later use.
Maybe the last two possibilities are the same, and I'm mixing up concepts.
Remember that I'm searching for a good way to structure the code and have functions available to other modules and also inside the module itself.
I appreciate any clarification.
In the examples that they release, Appcelerator appears to follow the non-prototype approach. You can see it in the examples they have released: https://github.com/appcelerator/Field-Service-App.
I've seen a lot of different approaches to structuring applications in Titanium before Alloy. Since Alloy, I've found following the development team's examples helpful to me.
With that being said, it seems to me that all of this is still under interpretation and open to change and community development. Before Alloy there were some great community suggestions on structuring an app and I believe that it is still open with Alloy. Often when I find someone's example code I see something they did with it that appears to organize it a bit better than I thought of. It seems to make it a bit easier.
I think you should structure your application in a way that makes sense to you. You may stumble on to a better and easier way of developing applications with Alloy, because you are looking at it critically.
I haven't found a lot of extensive Alloy examples, but Field-Service-App makes sense to me. They have a nice separation of the elements in the application beyond MVC. Check it out.