Unfortunately, I am not very familiar with these two terms, and I have a feeling I need to learn more about them as I am approaching an app release.
So, if I am running the app in development mode, am I not using exactly the same code as in production? What does it actually change, and what is its purpose?
If it's about the server, then that's understandable: I don't want to mess with the server that's being used by the users, so I guess I need to connect to a second (development) server. However, I am interested to know what it changes in my code. I am still going to use the same locally stored project, right?
Sorry for being so naive!
The development build is used, as the name suggests, for development purposes. You get source maps, debugging, and often hot reloading in those builds.
React Native includes some very useful tools for development: remote JavaScript debugging in Chrome, live reload, hot reloading, and an element inspector similar to the beloved inspector that you use in Chrome.
The production build, on the other hand, runs in production mode, which means this is the code running on your clients' side. The production build runs a minifier (uglify) and bundles your source files into one or more minified files. There are also no source maps or hot reloading included.
Furthermore, production mode is most useful for two things: testing your app's performance, since development mode slows your app down considerably, and catching bugs that only show up in production.
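For example, React Native exposes a global __DEV__ flag that is true in development builds and false in a release/production bundle, so you can see the difference directly in code. A minimal sketch (the log messages are just illustrative):

```js
// A minimal sketch: __DEV__ is a global that React Native sets to true in
// development builds and false in the minified release/production bundle.
if (__DEV__) {
  // Development build: source maps, hot reloading, extra runtime warnings.
  console.log('Running in development mode');
} else {
  // Production build: minified code, no source maps, no hot reloading.
  console.log('Running in production mode');
}
```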
Hope this helps.
https://docs.expo.io/versions/latest/workflow/development-mode/
Related
I am trying to build a desktop app.
I am thinking of using Electron on the recommendation of a web-developer friend of mine, but as I am the sole developer, I don't have the means to test the software on different platforms (OS, hardware, etc.). So I anticipate that this will cause problems later on when testing/debugging the software on different platforms and operating systems.
I have ruled out web apps because of users' privacy concerns about hosting their data remotely.
The software is pretty lightweight and is roughly equivalent to an image viewer app with some slight modifications.
How do I handle the variation between different platforms?
Any literature suggestions pointing me in the general direction are also welcome.
Sometimes it helps to think of Electron as two processes.
The renderer versus the main process. Generally, the renderer process, which runs the HTML/CSS/JS, is its own isolated component, and you communicate with the main process using IPC.
So, generally, for the UI you can use almost any web-based testing framework to test reliability. At Amna, for example, we use Cypress as our E2E testing platform. You can also use something like QAWolf. Both should work with localhost. In general, most website testing tools should work fine, and consistently across platforms.
Where this gets tricky is when UI functionality makes a call to the OS or the main process, for example saving to disk or launching a program.
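To make that boundary concrete, here is a rough sketch of a renderer asking the main process to save a file over IPC. The 'save-file' channel, the myApp bridge object, and saveFile are made-up names for illustration:

```js
// main.js (main process): OS-level work, e.g. writing to disk, stays here.
const { ipcMain } = require('electron');
const fs = require('fs/promises');

ipcMain.handle('save-file', async (_event, filePath, contents) => {
  await fs.writeFile(filePath, contents, 'utf8');
  return true;
});

// preload.js: exposes a narrow API to the isolated renderer
// (works with contextIsolation: true, nodeIntegration: false).
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('myApp', {
  saveFile: (filePath, contents) => ipcRenderer.invoke('save-file', filePath, contents),
});

// renderer (plain web code, testable with Cypress like any page):
// await window.myApp.saveFile('notes.txt', 'hello');
```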
The general flow is this, and I've yet to find radically simpler options:
Set up a VM or buy a machine with the corresponding OS. I used Spot VMs in Azure for this.
Manually test the scenarios you care about in each VM before you ship.
If you have a lot of cases that rely on the OS, then you should be able to further optimize this by using an automated test runner like Spectron.
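As a sketch of what such an automated smoke test can look like with Spectron, assuming Mocha as the test runner (note that Spectron has since been deprecated by the Electron team, so treat this as an illustration of the idea rather than a recommendation):

```js
// spec/smoke.test.js - assumes spectron and mocha are installed as dev dependencies.
const assert = require('assert');
const path = require('path');
const { Application } = require('spectron');

describe('application launch', function () {
  this.timeout(30000);

  beforeEach(function () {
    this.app = new Application({
      path: require('electron'),            // path to the Electron binary
      args: [path.join(__dirname, '..')],   // your app's root (where package.json lives)
    });
    return this.app.start();
  });

  afterEach(function () {
    if (this.app && this.app.isRunning()) {
      return this.app.stop();
    }
  });

  it('opens a window', async function () {
    // A basic cross-platform smoke check: the app starts and shows one window.
    const count = await this.app.client.getWindowCount();
    assert.strictEqual(count, 1);
  });
});
```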
From experience, what I've realized is that most of the iterations I do happen more on the UI than the underlying functions with the cross-platform capabilities. And if your code has good separation (e.g. contextIsolation:true, nodeIntegration:false), it should be pretty obvious when you need to do an entire "cross-platform" test vs just UI tests.
I'm not familiar with many large-scale Electron testing frameworks, but I do know that ToDesktop handles package building and generating binaries to perform a smoke test and verify that things open across different operating systems.
It depends.
The answer depends on what you are building, so it makes sense to figure out what you actually want to build. Some questions you might ask yourself:
Do I need a database?
Do I need authentication?
Do I need portability?
Do I need speed to market?
Do I want to pick a language I'm familiar with?
These are all good questions and there are dozens more we all ask ourselves. However, back to your original question.
Electron is a fine choice
Yes, there are alternatives. But Electron is used for Visual Studio Code, Facebook Messenger, Microsoft Teams and Figma. Choosing Electron means there are other developers making apps and proven apps in the market, so you don't have to worry about a dead ecosystem.
Electron is easy to onboard to if you know web technologies: think JS, HTML and CSS. If you know these, you can transfer your web-dev knowledge and make a cross-platform app. You don't have to worry about learning each OS, since the UI is a web page that will look mostly* the same on each OS. (*Some very minor differences, but essentially the same.)
Cross-platform deployment is easy
There are a few ways of bringing your app to multiple platforms; I happen to be most familiar with electron-builder, but other solutions work as well.
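As a rough illustration, a minimal electron-builder setup can live directly in package.json. The appId and targets below are placeholders, not recommendations:

```json
{
  "scripts": {
    "dist": "electron-builder --win --mac --linux"
  },
  "build": {
    "appId": "com.example.myapp",
    "win": { "target": "nsis" },
    "mac": { "target": "dmg" },
    "linux": { "target": "AppImage" }
  }
}
```

Running npm run dist then produces installers/binaries per platform, with the usual caveat that cross-building has limits (macOS targets generally need to be built and signed on a Mac).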
Many templates to start with
I am biased, since I'm the author of secure-electron-template which is one of the many templates you can choose from when starting an app. However, I recently reviewed all Electron templates and found that only 4 do not have serious security vulnerabilities.
The Electron framework is updated frequently, and over the course of the past few years there has been a shift in the way Electron apps are made. Some earlier frameworks didn't have good secure defaults, which some of the older Electron templates inherited, and thus they aren't as secure as newer templates that follow security guidelines.
If you decide on Electron, give my template a try. It's got a number of features I'm building out in order to help the community with features they might want (e.g. internationalization (i18n), saving local data, custom context menus, page routing, E2E unit testing, and license key validation, to name a few things).
Originally I ran a local server on my PC in order to make my django REST api available for my React Native app to reach out to through my computer's ip. So I had a base url hardcoded into my js network utilities as http://10.0.0.xxx:8000/api/ which I used as the basis for all my network calls. Recently I deployed my backend to Heroku so that I could demonstrate my app when away from my computer. So for now I just made a second hardcoded base url of https://my-cool-app.heroku.com/api/ which I manually flip back and forth between in my js code depending on if I want to use my local server (for debugging while devving) or the remote server for demonstration (and by "manually flip back and forth", I mean I literally change my code to point to one or the other).
I understand this is a terrible way to go about things and that I'm missing some major pieces to the puzzle that probably apply not just to RN projects but to most full stack projects where the frontend and backend are not hosted on the same server. I know I can look for the __DEV__ flag to see if I'm working in a debug or release version, but then would I have to keep two versions of the app on my phone somehow? Also, does it even make sense to keep my base urls hanging around on the front end, or should they be dispensed from the backend in some way instead?
I personally use:
https://github.com/zetachang/react-native-dotenv
for my environment variables, like my backend API URL and other configuration that depends on the environment.
Since it's similar to many backend frameworks like Django or Laravel, I absolutely love this library for managing environment variables :)
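For instance, with the linked library you keep the base URL in per-environment .env files and import it, instead of flipping the code by hand. A sketch, assuming the babel preset from that repo is configured; the variable name API_BASE_URL and the helper apiUrl are just examples:

```js
// Assumes the babel preset from the linked repo is enabled, e.g. in .babelrc:
//   { "presets": ["react-native", "react-native-dotenv"] }
// and a .env file per environment containing something like:
//   API_BASE_URL=http://10.0.0.xxx:8000/api/          (local Django server)
//   API_BASE_URL=https://my-cool-app.heroku.com/api/  (deployed Heroku backend)

import { API_BASE_URL } from 'react-native-dotenv';

// Every network utility builds its requests from the env-provided base URL,
// so nothing in the source needs to be edited by hand when switching servers.
export const apiUrl = (path) => `${API_BASE_URL}${path}`;
```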
We're developing a solution which uses Ektron. As part of our solution we all have local IIS instances (localhost) and deploy to this local instance as part of the development life cycle.
The problem is that after a deployment, once the DLLs are replaced, IIS restarts and the app pool is recycled, which means the Ektron DLLs need to reload themselves.
This process takes an extended amount of time.
Is there any way to improve the loading time of Ektron?
To some extent, this is the nature of a large app running as a website rather than a web application. Removing the workarea from your local environment is one way to get this compile time down, though this will naturally not work depending on your workflow, for example if you are not using a separate dev DB or if you are storing the workarea in source control.
I have seen some attempts to pre-compile the workarea and keep the working code in a separate project (http://dev.ektron.com/forum.aspx?g=posts&t=10996), but this approach will only speed up your builds, not the recompilation of individual pages that will occur after a build as a result of running as a web site.
The last (and least best-practice) solution is to simply avoid making code changes that cause a recompile, like modifying app_code. Apps running as websites are perfectly happy to recompile a single page's codebehind without regenerating DLLs, which is advantageous for productivity but ultimately discourages good practices like reusing code in libraries. Keep in mind that this is terrible advice, but if you have a deadline and are staring at an Ektron page loading every 30 minutes, it can be useful to know.
Same problem here. I found this: http://brianpereras.blogspot.com/2013/06/ektron-85-86-workarea-is-slow-compared.html
That says that the help documentation was moved to be retrieved from an online source (documentation.ektron.com). We're running Ektron 9, and I just made this change and it seems much faster on first load (after iisreset).
The solution is to set documentation.ektron.com to 127.0.0.1 in your hosts file.
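For example, on a Windows server the entry in C:\Windows\System32\drivers\etc\hosts would look like this, so the lookup for the online help documentation resolves locally instead of hanging on an outbound request:

```
127.0.0.1    documentation.ektron.com
```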
There is not; this is just how IIS works. Instead of running a local instance of Ektron, it's a good idea to just point your web.config file to your test server's database and copy the /workarea folder to your local PC. You can't edit Ektron locally, but you can change the data on your test server and it will show up locally.
Is there a way to determine if the App is running locally or has been deployed through the App Store?
I would like to test the trial mode functionality using Windows.ApplicationModel.Store.CurrentAppSimulator during development but default to Windows.ApplicationModel.Store.CurrentApp if the app has been downloaded from the store by a regular user.
I don't believe this is easy to do. I suspect the easiest way is through conditional compilation, producing a specific build for submission. You can use AjaxMin for this, but that would require a little bit of setting up.
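For example, a sketch of that approach; DEBUG here is a hypothetical symbol that your build step (e.g. AjaxMin preprocessing) would leave defined in development builds and strip or leave undefined in the store submission build:

```js
// Pick the store object once, at startup.
var Store = Windows.ApplicationModel.Store;

var currentApp = (typeof DEBUG !== "undefined" && DEBUG)
    ? Store.CurrentAppSimulator   // trial-mode testing (driven by WindowsStoreProxy.xml)
    : Store.CurrentApp;           // real Store licensing for deployed users

// The rest of the code only ever talks to currentApp, e.g.:
// var license = currentApp.licenseInformation;
```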
Given that a deployed application is supposed to be indistinguishable no matter its deployment mechanism, I don't think this:
http://msdn.microsoft.com/en-us/library/windows/apps/windows.applicationmodel.package.installedlocation.aspx
will help. It will plausibly tell you whether you've been deployed from VS (which deploys loose files) rather than as a package.
I'm very new to CI, but I have recently inherited a project where TeamCity has just been implemented, and I'm slowly getting my head around it. One thing we would like to do is run some Selenium tests as part of the build process. I've created the Selenium tests and can run them successfully via nunit-console on my development machine. The build server builds the project and then deploys it (a Web Forms application, as it happens) to a staging server.
Before each Selenium test we set the database to a known state, i.e. to only have certain records in place; that way each test is independent of the others. The problem is that the staging server will be used by real "human" testers, so this would cause them a problem with the database continually being reset (records being removed, etc.). The question is: should I also deploy the application to a virtual directory on the build server, run the Selenium tests against that, and only deploy to the staging server if those tests pass?
Or have I got this stuff completely wrong? If so how do you do it in your organisation?
I suggest that you do not mix your automated and manual testing by allowing your testers to access the server that is staged for your automated tests. This can cause false negatives in both your automated and your manual tests. These 'bugs' are non-deterministic and more than likely never reproducible (very bad news). This will cause you a lot of unnecessary 'bug reports' and build failures.
So here is what you can do...
In addition to your current setup, you can create an extra staged server for your manual testers. This is the least you should do. You should probably create several of them, one for each tester.
And here comes the rant...
In my current project we recently found out that our testers (we had ~10 of them) were reusing one server. They claimed that since our app is going to have multiple concurrent users, it was a good idea that while they were testing the individual functionalities, they were also testing how these functionalities behave while multiple users are working on the same server. WRONG!
If multiple users are a concern, there should be test cases for the specific concerns. If functionality#1 can interfere with functionality#2, it should be specifically tested and not just be 'tested-by-luck'.
Before this was explained to our manual testers, we had many false bug reports due to the fact that one tester was simply stepping on another tester's toes. (e.g. tester1 deleted a record that tester2 introduced to the system, etc...). This created a lot of unnecessary bug reports and these bugs were never reproducible.
Sorry about the rant; I hope this still helps :)