Alternate Lambda function deployment for production - serverless-framework

I'm developing a web extension for the Chrome store that calls a backend deployed on AWS Lambda using the Serverless Framework.
When developing the REST API, I may introduce breaking changes. As publishing an update on the Chrome store can take a long time and is unpredictable (1 day to 3 weeks), the solution I'm thinking about to keep the API compatible with the extension is to deploy 2 different Lambda functions in production.
The idea is that when I push new changes to the master branch, the Lambda function with the oldest version is updated and is ready to receive calls as soon as the update is approved on the Chrome store, without erasing the API currently in use.
First, is this a good pattern for handling updates of a client consuming an API when you don't have full control over it?
Second, is this something doable with the Serverless Framework, and how? I couldn't find any resources on the subject.
Thanks
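For the second question, one way to express this with the Serverless Framework is a single service exposing the code twice behind versioned paths, so the route used by the published extension keeps working while the new one waits for store approval. A hypothetical serverless.yml sketch (service name, handlers and paths are all assumptions):

service: extension-backend

provider:
  name: aws
  runtime: nodejs18.x
  stage: prod

functions:
  apiV1:
    handler: src/v1/handler.main   # frozen; serves the extension already in the store
    events:
      - http:
          path: v1/{proxy+}
          method: any
  apiV2:
    handler: src/v2/handler.main   # updated on pushes to master; used once the store approves
    events:
      - http:
          path: v2/{proxy+}
          method: any

Each extension release would then hardcode the base path matching the API version it was built against.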


Have you found a way of persisting the Odoo core modules in v14 other than a volume? And so, is it possible to deploy Odoo in Cloud Run?

I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30€/month) + Cloud Run. But after a few minutes, the Odoo interface shows me a white screen, with many logs in the console similar to this:
GET 404 1.04 KB 24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, as Cloud Run is stateless and the web static files seem to be stored in the core module, this information is lost after the container is killed. As I've spent one month looking for a solution, before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? And so, is it possible to deploy Odoo in Cloud Run?
Here are all the ideas that I tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store this session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. But the problem persisted.
Then I read that the static CSS and JS files generated in the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 could store these files in S3. But although I configured this addon, the problem persisted.
Next, I found this link describing the need to regenerate assets so that they are stored in the db. But although I did that, the problem persisted.
Finally, I thought about deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard option, and so seems to have the best chance of working, although it would make GitOps difficult to implement. Containers could be deployed on the VM through Docker Compose, which would help with rolling out updates. GKE Anthos seems to support GitOps too and seems to persist volumes, but its description says it is stateless. Finally, there's the option of deploying on a k8s cluster; this would use containers and allow autoscaling, unlike the Docker Compose-on-a-VM approach, but it seems more expensive and harder to implement. Regarding the cost, the idea is to try small worker node machines so the cost stays low during the night. Regarding the difficulty of deploying, the goal is to implement GitOps, so it seems Argo or something similar should be added. Also, I heard GKE Autopilot has a good free tier and is easier to deploy.
Thanks in advance :)
Cloud Run isn't a good solution for that. Indeed, if the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance on each request, and can thus lose the file even in the middle of a session.
The best solution is to use a VM with a sticky-session configuration. You can use old-school deployment on Compute Engine, or a cloud-native solution with GKE/K8s. It's more or less the same cost if you have only 1 cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. But GKE can handle stateful deployments, as you need with Odoo.
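To illustrate the sticky-session idea on GKE/K8s, here is a minimal hypothetical sketch of a Service using client-IP session affinity (names and labels are assumptions; for HTTP(S) load balancing, cookie-based affinity via a BackendConfig is another option):

apiVersion: v1
kind: Service
metadata:
  name: odoo
spec:
  selector:
    app: odoo
  sessionAffinity: ClientIP   # keep each client on the same pod
  ports:
    - port: 80
      targetPort: 8069        # Odoo's default HTTP port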

How to use Pentaho Community API

I have a Pentaho Community server 8.1 already running and I would like to know if this version has an API available. I'm using the following code and getting a 200, but there is no Basic auth, so I can't authenticate correctly.
import requests

# Log in through Pentaho's Spring Security form endpoint
data = {"j_username": "admin", "j_password": "password"}
r = requests.post('http://(serverip):8080/pentaho/j_spring_security_check', data=data)
Is the authentication for this API configurable?
The whole idea is that I can use the scheduler, since Spoon for the Community version doesn't have access to it. BUT I tried using the Enterprise client and I was able to schedule, so the module is there, you just can't reach it.
Thanks!
Yes, and the full API is documented here:
https://help.pentaho.com/Documentation/8.1/Developer_Center/REST_API
Note: if you want to enable username/password authentication on the URL, you have to edit security.properties and reboot (an older, insecure approach, but possibly simpler to get you going for development purposes).
You're absolutely right: the CORE platform does have the functionality, just not the UI, so you're more than welcome to use the API to drive the scheduler engine.
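As a starting point, a minimal hypothetical sketch of calling the scheduler with HTTP Basic auth from requests; the /api/scheduler/jobs path is an assumption, so verify it against the REST docs linked above:

import requests
from requests.auth import HTTPBasicAuth

base = "http://(serverip):8080/pentaho"
auth = HTTPBasicAuth("admin", "password")

# List the jobs known to the scheduler engine (endpoint path assumed)
r = requests.get(base + "/api/scheduler/jobs", auth=auth,
                 headers={"Accept": "application/json"})
r.raise_for_status()
print(r.json())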

How to separate debug and release builds with respect to their api destinations in React Native

Originally I ran a local server on my PC in order to make my Django REST API available for my React Native app to reach through my computer's IP. So I had a base URL hardcoded into my JS network utilities as http://10.0.0.xxx:8000/api/, which I used as the basis for all my network calls. Recently I deployed my backend to Heroku so that I could demonstrate my app when away from my computer. So for now I just made a second hardcoded base URL of https://my-cool-app.heroku.com/api/, which I manually flip back and forth between in my JS code depending on whether I want to use my local server (for debugging while developing) or the remote server for demonstration (and by "manually flip back and forth", I mean I literally change my code to point to one or the other).
I understand this is a terrible way to go about things and that I'm missing some major pieces of the puzzle that probably apply not just to RN projects but to most full-stack projects where the frontend and backend are not hosted on the same server. I know I can look for the __DEV__ flag to see if I'm working in a debug or release version, but then would I have to keep two versions of the app on my phone somehow? Also, does it even make sense to keep my base URLs hanging around on the frontend, or should they be dispensed from the backend in some way instead?
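For reference, the __DEV__ check mentioned above can be as small as one module, reusing the two URLs from the question (a hedged sketch; it doesn't solve keeping two app versions on one phone):

// api.js - resolve the base URL once, at module load
const BASE_URL = __DEV__
  ? 'http://10.0.0.xxx:8000/api/'          // local Django server (debug builds)
  : 'https://my-cool-app.heroku.com/api/'; // Heroku (release builds)

export default BASE_URL;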
I personally use:
https://github.com/zetachang/react-native-dotenv
for my environment variables, like my backend API and other configs based on the env.
Since it's similar to many backend libs like Django or Laravel, I absolutely love this library for managing environment variables :)
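A hedged sketch of how that library is typically wired up (the babel preset entry and variable name follow my reading of the package's README; treat them as assumptions and double-check there):

// .env
// API_URL=https://my-cool-app.heroku.com/api/

// .babelrc - the package registers itself as a babel preset
// { "presets": ["react-native", "react-native-dotenv"] }

// network utility: the variable is inlined at build time
import { API_URL } from 'react-native-dotenv';

fetch(API_URL + 'users/')   // hypothetical endpoint
  .then(res => res.json());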

Worklight Analytics, Native Java API, no messages in dashboard

Worklight 6.2.0
Native Worklight App on Samsung Galaxy S4, Android 4.4.2
WLAnalytics.enable();
WLAnalytics.log("some text", new org.json.JSONOBject() );
WLAnalytics.send();
// and also go on to successfully call an adapter
The Analytics Dashboard shows the app version and adapter activity. Log Search does not show any application log messages, and the dropdown for selecting applications shows "All Applications" only, with no sign of my app.
Have I missed some initialisation step? Any other ideas?
** edited to add **
It has been suggested that we should use the method:
WLAnalytics.log("some text");
In our 6.2.0.00 CLI environment there is no such Java method.
The answer is that there is a further initialisation requirement that seems to be necessary when working with a pure native application; these are typically built using the Worklight CLI tooling.
This is the initialisation; note the call to Logger.setContext():
WLAnalytics.enable();
Logger.setContext(this); // 'this' must be an Android Context, e.g. your Activity
Then this works:
WLAnalytics.log("My test message2", new org.json.JSONObject());
It's worth noting that the call to WLAnalytics.send() is not necessary in normal operation, as the analytics data is typically buffered and sent piggy-backed on adapter calls. However, while testing, a call to send() does help.
Further, if running in an environment where the Analytics WAR is on a separate machine from the Worklight Server WAR, there are additional latencies, so testing all of this needs care.
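Putting the pieces together, a minimal sketch of the initialisation inside an Activity (the import package names are assumptions based on the 6.2 native Android API described above):

import android.app.Activity;
import android.os.Bundle;
import org.json.JSONObject;
import com.worklight.common.Logger;      // assumed package
import com.worklight.common.WLAnalytics; // assumed package

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WLAnalytics.enable();
        Logger.setContext(this);  // required for pure native apps
        WLAnalytics.log("My test message2", new JSONObject());
        WLAnalytics.send();       // optional; analytics normally piggy-back on adapter calls
    }
}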
For now, I suggest that you just use the WLAnalytics.log(String) method. There are some clear inconsistencies that need to be dealt with whether it be through documentation or code fixes.
The WL.Logger APIs were originally created to send log data to a custom adapter, which is why they take a dictionary/object for extra metadata. The data sent to the custom adapter could be read as a valid JSON object to run operations on the adapter.
The WL.Analytics APIs mimicked the WL.Logger APIs for the same purpose: parsing the JSON on a worklight adapter. The Operational analytics server came as a convenience to intercept and display some of these logs, but not all of them are being captured as you have learned.
Your questions are all valid though, as none of this is described in the documentation. In future releases, we may make use of the extra JSON object passed into the API in the Operational analytics console, but for right now they only serve their original purpose of sending the analytics to a custom adapter.

IBM Worklight - is Direct Update allowed by Apple's guidelines for the App Store?

I have read about Worklight's Direct Update feature already. However, I still have some questions that I would like to clarify:
Q1: Is it true that Apple allows Worklight apps to be published to the App Store even though there is a Direct Update feature?
Q2: How will Apple review and monitor a Worklight app's content if there is a huge change after a Direct Update? Or does Apple not worry about the cached web resources in the application?
Q3: Is there any limitation or pre-condition on Direct Update for the web resources? For example, must the major entry HTML and JS script files exist, etc.?
Q1: Is it true that Apple allows Worklight apps to be published to the App Store even though there is a Direct Update feature?
A1: There are existing Worklight customers that have submitted an application to the App Store and passed Apple's app submission process. For best results, make sure you use Worklight v5.0.6.1 or later.
Q2: How will Apple review and monitor a Worklight app's content if there is a huge change after a Direct Update? Or does Apple not worry about the cached web resources in the application?
A2: Apple only reviews app submissions to the App Store and whether or not they follow its guidelines. It does not review future updates to the application (as long as it was not re-submitted), for example in the form of a Direct Update, unless there are some extraordinary circumstances (like inappropriate content that is discovered afterwards, for example).
Q3: Is there any limitation or pre-condition on Direct Update for the web resources? For example, must the major entry HTML and JS script files exist, etc.?
A3: I am not entirely sure I understand the question. There is no limitation in Direct Update; this feature replaces the existing web resources of an application with new ones. The only thing I can think of is that both the Worklight Studio (that the app was created in) and the Worklight Server (that the app lives on) must be of the same version number.
An update.
Apple now allows code updates if you use a webview:
3.3.2 An Application may not download or install executable code. Interpreted code may only be used in an Application if all scripts, code and interpreters are packaged in the Application and not downloaded. The only exception to the foregoing is scripts and code downloaded and run by Apple's built-in WebKit framework, provided that such scripts and code do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.