Controlling which CMS environment is to be used at compile time - objective-c

I am building an iOS application that integrates with a Rails CMS. The CMS runs in multiple environments (development, staging, preprod and production). Currently the Info.plist file contains a key whose string value determines which CMS environment the app communicates with. This is not ideal: every time the application is archived for ad-hoc or App Store distribution the value must be changed, which is far too easy to forget - as is changing the value back to development after the archive process.
My question is: what would be a better approach to controlling which CMS environment the application communicates with?
The next approach I was considering was to have the key in the Info.plist point to an environment variable. I could then edit each action of the project's scheme (Run, Test, Archive) in Xcode to set that variable as appropriate. The problem with that approach is that scheme setups are not included in source control. Also, only one scheme is used for creating both ad-hoc and App Store archives, which need to use different CMS environments.
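For reference, this is roughly what the current pattern looks like at runtime - a minimal sketch, where the key name "CMSEnvironment" and the URLs are hypothetical, not from my actual project:

```objective-c
// Resolve the CMS base URL from an Info.plist key.
NSString *env = [[NSBundle mainBundle] objectForInfoDictionaryKey:@"CMSEnvironment"];
NSDictionary *baseURLs = @{ @"development": @"http://localhost:3000",
                            @"staging":     @"https://cms-staging.example.com",
                            @"production":  @"https://cms.example.com" };
NSURL *cmsBaseURL = [NSURL URLWithString:baseURLs[env]];
```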

Related

How to use shared library in ASP.Net Core MVC running on IIS

I'm looking into using ASP.Net Core MVC for some of my new projects. I work on a team of developers for a very large organization, and we each individually write a lot of small web apps. Due to the size of our organization we have a lot of rules to follow, and sometimes those rules change, completely out of our control. This is what we have used in past projects, all running on IIS:
ASP Classic - Each IIS root folder has a shared folder, containing a lot of commonly used .asp files. These files are mostly the same on each server, but can point to different databases for dev/test/prod environments. These library files are used for common things like authentication, authorization, encryption, sending emails, etc... Each application would be in a sibling folder to the shared folder, and include files like "..\shared\library.asp"
ASP.Net / MVC - The closest thing we could find was the GAC. Everybody says not to use the GAC, but for our purposes it does exactly what we need. We built a DLL library and store it in the GAC of each web server. We then put local configuration (dev/test/prod environment-specific) information in the global web.config of each IIS server. Application-specific information is stored in that application's local web.config file.
The beauty of these two systems is that when things change, we can simply update the global libraries, and every application that depends on them adapts to the new code without needing a recompile. We have many applications running on many web servers. This may not be ideal, but for our needs it works perfectly, considering the rules can change at a moment's notice and recompiling every application would be a huge ordeal. We just have to be sure never to introduce breaking changes into our libraries, which is simple enough. We have had zero problems with how it works.
Now, on to ASP.Net Core. Is there an elegant way to do this? It seems like Core doesn't support the GAC, nor does it support web.config. Everything wants to use appsettings.json. Is there a way to create an appsettings.json at the root level of IIS, and have it set global variables like environment="dev", authdatabase="devsql" etc? And can we store a .Net Core/Standard DLL in a shared folder, and have every app load it with a path like "..\shared\library.dll"? The closest thing I could find to do this with .Net framework was the GAC, but I'm not really finding any answers for this with Core. I appreciate any help, thanks!
sometimes things change, and we can simply go update the global libraries, and every application that depends on them will adapt to the new code without needing a recompile
Note that this is exactly one of the reasons why GAC deployment is usually avoided: if you update a dependency, and it happens to contain a breaking change of any kind, applications will start to break randomly without you having any control over it.
Usually, if you update a dependency, you should retest every application that depends on it before you deploy the updated application. That is why dependency updates (e.g. via NuGet) are deliberate choices you need to make.
.NET Core avoids this in general by never sharing assemblies between applications and by allowing different versions side-by-side. That way, you can update applications one by one without affecting others.
This is actually a primary reason why .NET Core was made in the first place: the .NET Framework is shipped with Windows and is a global thing. All applications always use the same framework version, so whenever Microsoft ships an update to the .NET Framework, they have to be incredibly careful not to break applications. That is incredibly difficult because countless applications depend on all kinds of things in the framework; even fixing an apparently obvious bug can break stuff.
With .NET Core and side-by-side dependencies this is no longer a problem, because updates will not automatically break applications that still depend on older versions. It is a developer's explicit choice to update an application and ship newer dependencies.
So you should actually embrace this and start to develop your applications independently. If you have common dependencies, consider creating (private) NuGet packages for those, so that applications can depend on them and so that you have a good way to update them properly.
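As for the configuration half of the question: one way to approximate the old "global web.config" idea in ASP.NET Core is to layer a server-wide JSON file underneath each application's own appsettings.json via Microsoft.Extensions.Configuration. A minimal sketch, where the shared path and key names are assumptions for illustration:

```csharp
using Microsoft.Extensions.Configuration;

// Later sources override earlier ones, so the app-local file wins.
var config = new ConfigurationBuilder()
    .AddJsonFile(@"C:\inetpub\shared\appsettings.json",
                 optional: true, reloadOnChange: true)  // server-wide defaults
    .AddJsonFile("appsettings.json",
                 optional: true, reloadOnChange: true)  // app-specific overrides
    .Build();

string environment = config["environment"];   // e.g. "dev"
string authDb      = config["authdatabase"];  // e.g. "devsql"
```

Unlike the GAC approach, though, shared code should still travel as a (private) NuGet package per the advice above, so each application picks up library updates deliberately.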

Why is it a good idea to limit deployment of files to the user-profile or HKCU when using MSI?

Why is it a good idea to limit deployment of files to the user-profile or HKCU from my MSI or setup file?
Deployment is a crucial part of most development. Please give this content a chance: it is my firm belief that software quality can be dramatically improved by small changes in application design that make deployment more logical and more reliable - that is what this "answer" is really about: software development.
This is a Q/A-style question split from an answer that became too long: How do I avoid common design flaws in my WiX / MSI deployment solution? (an answer with broader scope, intended to help developers make better deployment decisions).
9. Overuse of per-user file and registry deployment.
Some applications won't run correctly for all users on a machine, because the user-specific data added during installation isn't correctly added to other users' profiles and registry. In other words, the application works only for the user who installed the software. This is obviously a serious design error.
There are several ways to "fix" this, but the whole issue of deployment of per-user files and settings is somewhat messy for a few fundamental reasons:
How do you reference count components installed multiple times? (for each user on the machine)
What do you do with the installed data and settings on uninstall?
How do you deal with new files and settings to install that differ from the ones that are on disk and in the registry and have user-made changes? Surely you don't overwrite automatically?
There are no real clear-cut answers, but there are several alternative ways to deal with the "problems". My preferred options are the second and third below (9.2 and 9.3), since I don't think Windows Installer should deploy, track or attempt to modify - or, worse yet, uninstall - user data and settings at all. It is user data that shouldn't be meddled with:
9.1 Using Windows Installer Self-Repair or similar
The first option is to get settings, files and HKCU registry keys deployed properly via the setup itself or setup-like features. There are two major ways to do this: relying on Windows Installer "self-repair", generally triggered by an advertised shortcut, or using Microsoft Active Setup.
Self-repair is what happens when you launch a shortcut to start your application, and Windows Installer kicks in and you see a progress bar whilst "something" is being installed. What is typically added are HKCU registry entries and user-profile files.
The other alternative is Active Setup, also a Microsoft feature. It essentially registers "something runnable" to run once per user at logon, and this can be used to set up per-user data. Active Setup allows "anything runnable" to be executed - for example, a copy of files to the user profile.
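For illustration, an Active Setup registration typically looks something like the sketch below (the component GUID, executable path and version are hypothetical). At each user's next logon, Windows compares the Version here against a per-user copy under HKCU and, if it is missing or older, runs StubPath once for that user:

```
Windows Registry Editor Version 5.00

; Hypothetical per-user data initializer registered via Active Setup.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\{11111111-2222-3333-4444-555555555555}]
"StubPath"="\"C:\\Program Files\\MyApp\\InitUserData.exe\""
"Version"="1,0,0"
```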
Both of these options mean that the user data and settings are copied in place once - and from then on they are generally not touched, although in the case of "self-repair" they might get uninstalled for any user who actually runs the application's uninstall (unless the setup is designed not to do so).
Although self-repair and Active Setup are "established" methods of getting applications running properly, it seems wrong to track user data with Windows Installer components. Why? Because it is really user data that shouldn't be meddled with once initialized.
Accordingly, my honest take on the whole issue is to try to avoid deploying user-specific data or registry keys and values altogether, which is what the two other user-data deployment methods described next are about.
9.2 Application Initialization of User Data
The second alternative - and one that I find much cleaner - is to change your application executable so that it can initialize all per-user settings and files from default settings and templates copied from a per-machine location, or from application-internal defaults (in the source code), instead of writing them via your setup.
In this scenario Windows Installer does not track the files or settings that are copied for each user. They are treated as user data that should not be interfered with at all. This avoids all interference, such as user data being reset or overwritten during upgrades and self-repair (and manual uninstall and reinstall).
If there are cases where "fixes" must be made to application settings, this can be achieved by having the application executable update the settings for each user on launch, and then tag the registry to record that the update has been completed.
The overall "conclusion" is that your setup should prepare your application for first launch; it should not set up the user-data and settings environment itself. All user-profile files and HKCU settings should be defaulted by the application if they are missing on launch - this yields a much more robust application that is also easier for QA personnel to test. It is particularly important on terminal servers, where self-repair is not allowed to run at all; there, the application data will simply be missing if you rely on self-repair to put user data in place.
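A minimal sketch of such first-launch initialization (paths and file names are assumptions for illustration, not a prescribed layout):

```csharp
// On every launch: if the per-user settings file is missing, seed it from a
// per-machine template that the MSI installed (and never touches again).
using System;
using System.IO;

static void EnsureUserSettings()
{
    string userDir = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
        "MyApp"); // hypothetical application folder name
    string userSettings = Path.Combine(userDir, "settings.xml");

    if (!File.Exists(userSettings))
    {
        string template = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            "MyApp", "settings-template.xml"); // installed per-machine by the setup

        Directory.CreateDirectory(userDir);
        File.Copy(template, userSettings);
    }
}
```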
9.3 "Cloud" or Database Storage of User Settings
To take things a step further in today's "cloud environment" - and this is, in my opinion, the preferred option: why should your application be restricted to files and registry keys and values? Why not store all user-specific settings in the solution's database?
You get full access, control and persistence for all settings, without any deployment issues at all.
You do get new management issues, though, and they must be shared between developers, system administrators and database administrators. But isn't the cloud pretty much the industry standard by now?
We have been struggling long enough with roaming profiles, corrupted user registry hives, mishandled user-profile data files, etc. Developers: save yourself a lot of trouble and create some new database management issues instead of deployment issues - and start yelling at a whole new bunch of people! :-)
Settings in databases are:
Free of "dual-source" problems: there is one instance, and it is updated in real time - none of the synchronization problems seen with user profiles and "roaming".
Inspectable, manageable and patchable.
Revisable (version control - older settings can be reverted).
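A minimal sketch of the idea - a per-user settings table plus a lookup helper; the schema, names and connection handling are illustrative only:

```csharp
// Hypothetical schema:
//   CREATE TABLE UserSetting (
//       UserName NVARCHAR(128) NOT NULL,
//       Name     NVARCHAR(128) NOT NULL,
//       Value    NVARCHAR(MAX) NULL,
//       PRIMARY KEY (UserName, Name));
using Microsoft.Data.SqlClient;

static string GetSetting(SqlConnection conn, string user, string name)
{
    using var cmd = new SqlCommand(
        "SELECT Value FROM UserSetting WHERE UserName = @u AND Name = @n", conn);
    cmd.Parameters.AddWithValue("@u", user);
    cmd.Parameters.AddWithValue("@n", name);
    return cmd.ExecuteScalar() as string; // null if the setting is not set
}
```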
You could even still "tweak" all the user settings from your setup by running database scripts as part of deployment - but if you are in a corporate environment, isn't the thought of just raising a ticket and having your database administrator run the maintenance scripts with proper transaction support and rollback much more appealing?
Even if you are delivering a large, fat-client vendor application for general distribution and third-party use (in other words, not a tailored corporate client/server solution where you are guaranteed to have a back-end database), you should consider cloud storage of user settings: have users log on to a cloud service with their email or similar, and synchronize settings in real time.
Such large applications generally need to "cache" some settings files on the computer and in HKCU, but it seems more and more feasible to keep all settings in a single temporary file in the user-profile area that is entirely "sacrificial" - it can even be deleted if corrupted, and the last saved settings downloaded again.
Instead of hosting the cloud yourself, it is obviously possible to let company DBAs configure their own company-wide cloud where they have full control of all settings and can also enforce mandatory policies and restrictions for your software's operation - not to mention the proper backup that becomes possible for all user settings.

IBM Worklight - is Direct Update allowed by Apple's guidelines for the App Store?

I have already read about Worklight's Direct Update feature. However, I still have some questions that I would like to clarify:
Q1: Is it true that Apple allows Worklight apps to be published to the App Store even though there is a Direct Update feature?
Q2: How will Apple review and monitor a Worklight app's content if there is a huge change after a direct update? Or does Apple not worry about the cached web resources in the application?
Q3: Are there any limitations or pre-conditions for direct update of the web resources? For example, must the main HTML and JS entry files exist, etc.?
Q1: Is it true that Apple allows Worklight apps to be published to the App Store even though there is a Direct Update feature?
A1: There are existing Worklight customers that have submitted an application to the App Store and passed Apple's app submission process. For best results, make sure you use Worklight v5.0.6.1 or later.
Q2: How will Apple review and monitor a Worklight app's content if there is a huge change after a direct update? Or does Apple not worry about the cached web resources in the application?
A2: Apple reviews only app submissions to the App Store and whether or not they follow its guidelines. It does not review subsequent updates to the application (as long as it is not re-submitted), for example in the form of a Direct Update, unless there are extraordinary circumstances (such as inappropriate content discovered afterwards).
Q3: Are there any limitations or pre-conditions for direct update of the web resources? For example, must the main HTML and JS entry files exist, etc.?
A3: I am not entirely sure I understand the question. There is no limitation in Direct Update - the feature replaces the application's existing web resources with new ones. The only requirement I can think of is that the Worklight Studio (which the app was created in) and the Worklight Server (which the app lives on) must be of the same version.
An update: Apple now allows code updates if you use a webview:
3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's built-in WebKit framework, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.

How to recognize programmatically that application is installed vs development mode?

I'm trying to get license info for my app, and the MSDN docs (http://msdn.microsoft.com/en-us/library/windows/apps/hh694065.aspx) advise using the Windows.ApplicationModel.Store.CurrentAppSimulator class for that purpose during development/testing, and replacing it with Windows.ApplicationModel.Store.CurrentApp when submitting the app to the store.
I wonder if there is any way to check in code (JavaScript in my case) whether the app was installed from the store, so my code can use the proper class and I won't have to remember to swap those classes every time I submit an update of the app to the store.
As far as I know, there is no such thing - I could not find one. In fact, LicenseInfo is what provides information about the store listing.
I use a config.js file to keep the settings that change between development and production in one place. For example, if your app talks to a service, the service URL will likely also change between development and production: the service might run on localhost during development and in an Azure environment in production. I keep a bool in there and change it by hand.
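Something like this - a minimal sketch where all names are illustrative, combined with the class switch from the question:

```javascript
// config.js - flipped by hand before packaging for the Store.
var AppConfig = {
    isDevelopment: true,                    // set to false for the Store build
    serviceUrl: "http://localhost:8080/api" // e.g. the Azure URL in production
};

// Elsewhere: pick the licensing class once and use it everywhere.
var currentApp = AppConfig.isDevelopment
    ? Windows.ApplicationModel.Store.CurrentAppSimulator
    : Windows.ApplicationModel.Store.CurrentApp;
```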
I have not automated this fully, but it is likely possible. You would need to dig through the MSBuild logs for the build created for the Store; if a configuration setting can be found there, the project could have two files, config.dev.js and config.release.js, and MSBuild could conditionally pick the right one. I haven't looked into this yet.
I think I found a solution, as described here: WinJS: are there #DEBUG or #RELEASE directives?. Not ideal, but it works for me.

Determining if the App is running locally or has been deployed through the App Store

Is there a way to determine if the App is running locally or has been deployed through the App Store?
I would like to test the trial mode functionality using Windows.ApplicationModel.Store.CurrentAppSimulator during development but default to Windows.ApplicationModel.Store.CurrentApp if the app has been downloaded from the store by a regular user.
I don't believe this is easy to do. I suspect the easiest way is through conditional compilation, producing a specific build for submission. You can use AjaxMin for this, but that would require a little bit of setting up.
Given that a deployed application is supposed to be indistinguishable regardless of its deployment mechanism, I don't think this:
http://msdn.microsoft.com/en-us/library/windows/apps/windows.applicationmodel.package.installedlocation.aspx
will help. At best it will plausibly tell you whether you were deployed from VS (which deploys loose files) rather than as a package.