So I've written plenty of e2e tests for my backend, and this is becoming overwhelming because all of the test methods are in one file.
The reason they are all in one file is that when my app is created, TypeORM creates an in-memory database instance on which I run all of the tests - I need the same database to be running across tests because I am doing cross-entity tests.
This part of the code is crucial. It initializes the app (which also initializes the db under the hood):
let app: INestApplication;

beforeAll(async () => {
  const moduleFixture = await Test.createTestingModule({
    imports: [AppModule],
  }).compile();

  app = moduleFixture.createNestApplication();
  await app.init();
});
Is there a way to somehow transfer beforeAll()'s context so that it could be accessed from tests located in other files?
Maybe somehow make app global?
OK, so a partial solution: instead of creating an in-memory database, you can create a SQLite database and force TypeORM to create a file (which will be your database) by using:
// SqliteConnectionOptions ships with TypeORM (import path may vary by version)
import { SqliteConnectionOptions } from 'typeorm/driver/sqlite/SqliteConnectionOptions';
// illustrative import - wherever your project's entity list lives
import { entities } from './entities';

export const e2eConfig: SqliteConnectionOptions = {
  type: 'sqlite',
  database: 'db.db',
  entities: entities,
  synchronize: true,
};
(SQLite is mostly compatible with MySQL.)
This will make your data persist between tests.
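If it helps, here is a minimal sketch of how a file-backed config like this could be wired into the app's TypeORM setup for e2e runs; the module and file names are illustrative, not from the question:
// test-database.module.ts (hypothetical file name)
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { e2eConfig } from './e2e.config'; // illustrative path to the config above

@Module({
  // TypeOrmModule.forRoot accepts TypeORM connection options,
  // so the file-backed SQLite config plugs straight in
  imports: [TypeOrmModule.forRoot(e2eConfig)],
})
export class TestDatabaseModule {}
You may also want to delete db.db before the whole suite starts so each run begins from a clean schema.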
I am struggling to pass launch arguments to Detox - for example, I want to pass a few different users as launch args. My init file looks like:
beforeAll(async () => {
  await device.launchApp({
    newInstance: true,
    permissions: { notifications: 'YES' },
    launchArgs: {
      users: {
        user1: { email: '123#abc.com', pass: '123456' },
        user2: { email: 'abc#123.com', pass: '654321' },
      },
    },
  });
});
However in my test file
await device.appLaunchArgs.get();
returns an empty object. Any ideas of what I am doing wrong? Am I misunderstanding what launchArgs are for?
The purpose of the launchArgs is to send parameters to the app being tested because you can't communicate with the app process otherwise. launchArgs enable you to configure specific app behavior, either (1) to pass dynamic parameters based on your test environment (e.g. ports of another process the app needs to connect to), or (2) to simulate a condition for a specific test case (e.g. you write two e2e tests, one that has some feature flag turned on and another one with it being off).
However in my test file
You don't access the values in a test file. Since the test file runs in the same Node process as the beforeAll, there's no need to pass args there. In fact, you can launch the app (with the appropriate args) directly in your test case, which is especially useful for case (2) above.
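For example, a minimal sketch of case (2), launching straight from a test suite with a hypothetical feature flag (the flag name and testID are made up for illustration):
describe('with the flag on', () => {
  beforeAll(async () => {
    // relaunch the app for this suite with a case-specific launch arg
    await device.launchApp({
      newInstance: true,
      launchArgs: { someFeatureFlag: true },
    });
  });

  it('shows the flagged screen', async () => {
    await expect(element(by.id('flagged-screen'))).toBeVisible();
  });
});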
To read the launchArgs in the app, you can create an .e2e.js mock file and then use react-native-launch-arguments to retrieve the configured values. The rest is up to you, but the general idea is to use the launch args in your app to change whatever part of the business logic or configuration you want to test.
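On the app side, a sketch of reading the args from the question with react-native-launch-arguments (the TypeScript interface is just an assumption about the shape you configured):
import { LaunchArguments } from 'react-native-launch-arguments';

interface E2EArgs {
  users?: Record<string, { email: string; pass: string }>;
}

// LaunchArguments.value() returns whatever was passed via launchArgs
const { users } = LaunchArguments.value<E2EArgs>();
if (users) {
  // e.g. pre-select a test account or point the app at a test backend
}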
Mobx and Redux will normally not persist any data. They will maintain a temporary global state while the app is running.
I know there are redux-persist and mobx-persist packages within both communities. But unfortunately these persisting solutions do not seem good at all. They only stringify or serialize a global state tree and persist it using some sort of key-value storage. Right?
The problem:
When such an app is opened again, the stringified store will be parsed and structured back into its original data structure (JSON, for instance) and then fully loaded into RAM. Am I right?
If yes, this is a problem. It is not good to always have a full "database" aka "global state" loaded in-memory. It will probably never be faster to filter data within a long array in my global state... compared to querying a table on SQLite, right?
I have been looking for some repository-like solution for persisting global state for either Redux or MobX. I am yearning for a solution for persisting and querying data on some well-known mobile database like SQLite or others.
Any answers will be very much appreciated.
Indeed, you can use the repository pattern.
On your repository, you may have a save method.
save(group: GroupLocalStorageModel): Promise<boolean> {
  // Realm writes must happen inside a write transaction
  this._localStorage.write(() => {
    this._localStorage.create<GroupLocalStorageModel>("Group", group);
  });
  return Promise.resolve(true);
}
This method literally saves your entity to whatever local storage you set up. In the example above, we are saving a group object to a Group collection; collections are like tables. We are using Realm, which is NoSQL.
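For completeness, a hedged sketch of what the "Group" schema behind that collection might look like in Realm (the property names are illustrative, not taken from the answer):
import Realm from 'realm';

const GroupSchema: Realm.ObjectSchema = {
  name: 'Group', // the "table"-like collection used by create() above
  primaryKey: 'id',
  properties: {
    id: 'string',
    name: 'string',
  },
};

// _localStorage in the repository would be a Realm opened with this schema
const openDatabase = () => Realm.open({ schema: [GroupSchema] });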
Once you have your repository, if you are using either redux or mobx, you will probably call your save method on your action. Both redux and mobx work with actions, right?
// `types` comes from mobx-state-tree; GroupModel, GroupLocalStorageModel and
// withEnvironment are defined elsewhere in the linked example project
import { types } from "mobx-state-tree";

export const GroupStoreModel = types
  .model("GroupStore")
  .props({
    groups: types.optional(types.array(GroupModel), []),
  })
  .extend(withEnvironment)
  .actions((self) => {
    return ({
      _addGroupToStore(group: GroupLocalStorageModel) {
        self.groups.push(group)
      },
      _deleteAllFromStore() {
        self.groups.clear()
      },
      _addGroupsToStoreBatch: (groups: GroupLocalStorageModel[]) => {
        // push(...) mutates the observable array; concat would return a new
        // array without ever updating the store
        self.groups.push(...groups);
      },
    })
  })
  /* Async actions */
  .actions((self) => {
    let groupRepository = self.environment.groupRepository;
    return ({
      addGroup(group: GroupLocalStorageModel) {
        groupRepository.save(group).then(result => self._addGroupToStore(group))
      },
      getAllGroupsPaginated(page: number) {
        groupRepository.getAllPaginated(page).then(groups => self._addGroupsToStoreBatch(groups));
      },
      deleteAll() {
        groupRepository.deleteAll();
        self._deleteAllFromStore();
      }
    })
  })
In this example, we are using mobx-state-tree. This addGroup action first updates our database and then updates the global state as well.
We still want to use our global state so that our views are re-rendered automatically, via connect for Redux or observer for MobX.
See more information in the example repository:
https://github.com/Hadajung/poc-react-native-database-example
AFAIK, there are two options for using SQLite with redux-persist.
redux-persist-sqlite-storage: In the maintainer's own words:
By default redux-persist uses AsyncStorage as storage engine in react-native. This is a drop-in replacement of AsyncStorage.
The library is inspired by react-native-sqlite-storage.
Please remember that to use this, you need to install an additional package, react-native-sqlite-storage.
redux-persist-sqlite: In the maintainer's own words:
A redux-persist storage adapter that writes to sqlite.
This is adapted from https://github.com/prsn/redux-persist-sqlite-storage, but uses Node.js sqlite3 rather than react-native.
Great for Electron apps that are backed by Redux.
UPDATE: react-native-mmkv: a React Native wrapper around MMKV, the key/value storage library developed by WeChat. As its about section says:
An extremely fast key/value storage library for React Native. ~30x faster than AsyncStorage!
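Whichever engine you pick, the wiring into redux-persist looks roughly the same - persistReducer just needs a storage object with promise-based getItem/setItem/removeItem. A hedged sketch (the engine and reducer path here are stand-ins, not the exact API of the packages above):
import { createStore } from 'redux';
import { persistReducer, persistStore } from 'redux-persist';
import rootReducer from './reducers'; // illustrative path

// Placeholder engine: any object with this promise-based interface works,
// which is what the SQLite/MMKV adapters above provide
const storageEngine = {
  getItem: (key: string): Promise<string | null> => Promise.resolve(null),
  setItem: (key: string, value: string): Promise<void> => Promise.resolve(),
  removeItem: (key: string): Promise<void> => Promise.resolve(),
};

const persistedReducer = persistReducer(
  { key: 'root', storage: storageEngine },
  rootReducer,
);

export const store = createStore(persistedReducer);
export const persistor = persistStore(store);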
I'm not really sure what you need, but if I understood you correctly, you need to persist large amounts of data and also load that same data back, but only in batches.
I believe that this kind of problem can be solved with a repository pattern and SOLID design principles.
You will need:
a store class (MobX store) that holds your business logic.
a repository class that is responsible for retrieving and persisting data.
The store gets the repository injected into it via the constructor.
Then when you call the initialize method on your store, it talks to the repository and retrieves the initial data. Now, initial data can be only a subset of all the data that is persisted. And you can implement some kind of paging on the store and repository, to retrieve data in batches as needed. Later you can call other methods to load and save additional data as needed.
A rough sketch (TypeScript):

interface Model { id: string }

class Repository {
  // load the first batch from storage
  async initialize(): Promise<Model[]> { /* query storage */ return []; }
  // load the next batch, e.g. the next 10 models
  async load(page: number): Promise<Model[]> { /* query storage */ return []; }
  // persist data
  async save(data: Model[]): Promise<void> { /* write to storage */ }
}

class Store {
  constructor(private repository: Repository) {}

  initialize() {
    return this.repository.initialize();
  }
  load(page: number) {
    return this.repository.load(page);
  }
  save(data: Model[]) {
    return this.repository.save(data);
  }
}
Now your application data shouldn't be one giant object, rather it should consist of multiple stores, where each store is responsible for a part of the data. For example, you would have one store and repository for handling todos and another pair that handles address book contacts etc.
Addendum:
The reason the repository is injected into the store is so you can easily swap it for some other implementation (the store doesn't care how the data is persisted and retrieved), and it also makes unit testing very easy.
You could also have a root store that would hold all other stores, so in essence you have your complete state in one place. So if you call serialize on the root store, it serializes all stores and returns one big object.
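A minimal sketch of that root-store idea (the store names and the serialize() contract are assumptions for illustration):
interface SerializableStore {
  serialize(): unknown;
}

class RootStore {
  constructor(private stores: Record<string, SerializableStore>) {}

  // Serializing the root just serializes every child store into one object
  serialize(): Record<string, unknown> {
    const result: Record<string, unknown> = {};
    for (const [name, store] of Object.entries(this.stores)) {
      result[name] = store.serialize();
    }
    return result;
  }
}

// Usage: new RootStore({ todos: todoStore, contacts: contactStore }).serialize()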
I think the best solution would be HydratedBloc or HydratedCubit from the hydrated_bloc package in the flutter_bloc ecosystem.
https://pub.dev/packages/hydrated_bloc
Under the hood it uses Hive, a very performant DB, and only keys are kept in memory, so it should not add huge bloat to the app like SQLite.
If you can put all your app logic in blocs/cubits, then the extra DB calls would be irrelevant.
I am new to React, specifically React Native, and I am unsure whether to use Redux to maintain the data store when most of the data will live in a local database. At the moment I have a "categories" reducer with a single case, SET_CATEGORIES, and every time I need to filter the data I call asynchronous action creators that load the necessary data and dispatch it to the reducer. An example of my code looks like this (where ALL_CATEGORY and BY_NAME_CATEGORY are queries):
categoryAction:
import {SET_CATEGORIES} from "../constants/actionTypes";
import {ALL_CATEGORY, BY_NAME_CATEGORY} from "../constants/db/load";
import {dbTransaction} from '../database/dbbase';

export const setCategories = (data) => {
  return {
    type: SET_CATEGORIES,
    payload: data
  };
};

export const allCategories = () => {
  return (dispatch) => {
    dbTransaction(ALL_CATEGORY)
      .then((data) => {
        dispatch(setCategories(data));
      }).catch((error) => console.log('ALL_CATEGORY ERROR'));
  };
};

export const byNameCategories = () => {
  return (dispatch) => {
    dbTransaction(BY_NAME_CATEGORY)
      .then((data) => {
        dispatch(setCategories(data));
      }).catch((error) => console.log('BY_NAME_CATEGORY ERROR'));
  };
};
And the categoryReducer is:
import {SET_CATEGORIES} from "../constants/actionTypes";

const initialState = [];

const categoryReducer = (state = initialState, action) => {
  switch (action.type) {
    case SET_CATEGORIES: {
      return action.payload;
    }
    default:
      return state;
  }
};

export default categoryReducer;
This works, but my question is: why not choose to create methods that directly call the local database, without using Redux? Is there any advantage to using Redux, or is it just to separate the application into layers?
Same question if the data were 100% in a web service: what would be the advantage of using Redux if the data will always be obtained from a source external to Redux?
why not choose to create methods that directly call the local database, without using Redux?
You can definitely choose to not use redux for your application; React/React Native or any other framework has no dependency on Redux. Redux isn't anything special, it's actually only a couple dozen lines of code to organize the way your application deals with state. In my honest but biased opinion, I would use Redux if I think an application might need to scale beyond 1,000+ (very arbitrary number) users or have more than 1 developer.
Is there any advantage to using Redux, or is it just to separate the application into layers?
There is no advantage to using Redux beyond maintainability and standardization.
Your application might seem simple now, only having to query a local database, and you can definitely use custom methods to update the store. But what happens when you need to make API requests too? Or when you need to add undo functionality? Will your code be comprehensible to new teammates? Ultimately, using Redux reduces your chances of incurring technical debt.
Same question if the data were 100% in a web service: what would be the advantage of using Redux if the data will always be obtained from a source external to Redux?
It will be no different from querying a local database. Redux doesn't care where you get data from; it only handles where the store is, when it should change, and how it should change.
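To make that concrete, here is a sketch of the same action backed by a web service instead of dbTransaction - the reducer and the rest of the wiring stay untouched (the URL is made up):
export const allCategoriesFromApi = () => {
  return (dispatch) => {
    fetch('https://example.com/api/categories')
      .then((res) => res.json())
      .then((data) => dispatch(setCategories(data)))
      .catch(() => console.log('ALL_CATEGORY API ERROR'));
  };
};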
Redux is handy if/when you decide that your app would benefit from having a global state store that's accessible from any component that needs it. I tend to use Redux in the following cases:
Two or more components need to access some common state and making that happen would be unreasonably difficult/messy without Redux (maybe the most important criterion).
You have components that depend upon the payload of an expensive REST call/db query/etc and it makes sense to temporarily persist that payload for future uses rather than perform the expensive call again, particularly when the above point is also true and/or your app has use cases that change that data locally.
You have payload data that requires elaborate formatting/filtering/parsing after being fetched in order to be properly used by your components (hey, APIs don't always give us what we need exactly how we need it in real life).
In your particular case, it may be that the db queries you need to make aren't particularly expensive, and you can let the db handle the filtering for you. If this is your main concern, Redux might be a bit more overhead than you need.
In the Intern framework, when I specify multiple tests using the functionalSuites config field and run tests through a BrowserStack tunnel, only one session is created in BrowserStack (everything is treated as a single test). As a result we have a few issues:
It's practically impossible to use BrowserStack for debugging a large number of tests. There is no navigation; you have to scroll through a huge log.
Tests are not fully isolated. For example, localStorage is shared between all tests.
The question: how can I force the Intern framework to create a new session for every single test? After looking at the codebase, it seems impossible at the moment.
PS: I would assume this behaviour applies to other tunnels as well.
Use the following Gist:
intern-parallel.js
Just put this file alongside intern.js and replace "intern!object" in your functional test files with "tests/intern-parallel".
Example functional test:
define([
  //'intern!object',
  'tests/intern-parallel',
  'intern/chai!assert',
  'require'
], function (registerSuite, assert, require) {
  registerSuite({
    name: 'automate first test',

    'google search': function () {
      return this.remote
        .get(require.toUrl('https://www.google.com'))
        .findByName("q")
        .type("Browserstack\n")
        .end()
        .sleep(5000)
        .takeScreenshot();
    }
  });
});
I am writing a test for my custom DS.RESTAdapter, which uses our own SDK as the transport instead of ajax calls. Now, I want to test the adapter's find, findAll, findQuery ... functions, which require me to pass an instance of the store as a parameter.
For example:
findAll: function(store, type, sinceToken){...}
To be able to test this, I need to pass the "store" param, which is not available in moduleFor in ember-qunit (unlike moduleForModel, where you can access the store via this.store within the test instance).
Is there another way to gain access to the current instance of the store?
Thanks.
Edit:
I solved this by creating mocks for both the store and the type.
You can create a store instance with:
var store = DS.Store.create({
  // in a moduleFor test, this.subject() returns the adapter under test
  adapter: this.subject()
});
And a mock for type: just an ordinary object with the properties required for the test.
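As for the type mock, a hedged sketch of what "an ordinary object with required properties" might look like for a REST adapter test - which property the adapter reads (modelName vs typeKey) depends on your Ember Data version:
// plain object standing in for the model class passed to findAll(store, type, sinceToken)
var typeMock = {
  modelName: 'post', // newer Ember Data adapters read type.modelName
  typeKey: 'post'    // older versions read type.typeKey instead
};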
You can mock this method (for instance, using the Sinon plugin for QUnit). Another solution for accessing the store (though I'm not sure it will work in your case), which helped me access the store from the global namespace, is using setup and teardown methods:
setup: function () {
  Ember.run(App, App.advanceReadiness);
},
teardown: function () {
  App.reset();
}