Publishing multiple Angular apps to Cloudflare Workers with wrangler

I'm new to Cloudflare Workers and the wrangler publish system, and I can find very little information about my requirements online; perhaps my search query is wrong, so I'm hoping to find some help here.
I have an NX workspace containing two apps. One app is deployed into the top-level worker, and the second should be deployed to a sub-directory in the same worker, effectively creating a parent-child structure like the following:
example.com/ -> top-level app
example.com/site2/ -> child-level app
My issue is that I do not understand where and how to define the /site2/ sub-directory in wrangler.toml. Should I have two separate worker-sites for these? I was under the impression that I could just update the worker (index.js) file in my single worker-site to handle /site2/ and otherwise treat the request as standard.
All I would really like to know is: how can I specify that my publish should go to the /site2/ sub-directory, if that is at all possible?
Thanks in advance.

There are a couple of ways to handle this. If the code / logic in the workers for the top-level vs. the child-level is completely different, I'd recommend using two separate workers. Then you can configure which "routes" each worker will run on -
https://developers.cloudflare.com/workers/cli-wrangler/configuration
Worker 1 could be -
routes = ["example.com/*"]
Worker 2 could be -
routes = ["example.com/site2/*"]
Note the trailing /* wildcards: a pattern like "example.com/" only matches the exact root URL. Where both patterns match, the more specific one (Worker 2's) takes precedence.
Check this out for more details on how routing / matching behaves -
https://developers.cloudflare.com/workers/platform/routes#matching-behavior
The other way to do it would be to have a single worker, and inspect the incoming request to behave differently depending on whether the request is at the root, or at /site2/. I'd only recommend this if there are small differences between how the two sites should behave (e.g. swapping out a variable).
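As a rough sketch of that second approach (TypeScript, service-worker syntax; getAssetFromKV is the helper from @cloudflare/kv-asset-handler that Workers Sites uses, but the /site2-dist prefix and the path-rewriting rule are invented for illustration):
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event));
});

async function handleRequest(event: FetchEvent): Promise<Response> {
  const url = new URL(event.request.url);
  if (url.pathname === "/site2" || url.pathname.startsWith("/site2/")) {
    // Rewrite /site2/... requests so they resolve against the child
    // app's assets in KV (assumes they were uploaded under site2-dist/).
    return getAssetFromKV(event, {
      mapRequestToAsset: (req) => {
        const u = new URL(req.url);
        u.pathname = u.pathname.replace(/^\/site2/, "/site2-dist");
        return new Request(u.toString(), req);
      },
    });
  }
  // Anything else is treated as a standard request for the top-level app.
  return getAssetFromKV(event);
}
This only pays off if the two apps can share one deploy; with two separately built Angular apps, the two-worker setup above is usually simpler.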

Related

Reuse Geb Pages for multiple domains

I'd like to test the search functionality of 30 websites that are generated by the same CMS under different domains with different Lucene-indexes. For this purpose I'd like to write a single Page Object which I'd like to be fed by the configuration with those 30 different baseUrls.
I'd be running those tests in the same environment so I'm not sure how to approach this issue. Is there anything I've been missing so far? Looking forward to a push in the right direction and thanks in advance.
You can always override the base URL in your test using browser.config.baseUrl = 'http://example.com'. It will be reverted to the value configured in GebConfig.groovy for the next test.
The question is how many tests you wish to run this way. If it's just one test, then you can get away with using Spock's support for parametrised tests with a where: block. If it's more than one test, then you will probably be looking at custom test runners, or at running your tests multiple times via your build system with different Geb environment settings.

Support for multiple environments in your Windows Store app

I have been working on a Windows Store app where I have to support multiple configuration parameters for my app. One of the parameters is the URL the app is talking to.
For example development environment, test, acceptance and finally production.
One of the things I'm currently thinking about is the most efficient way of supporting all these environments with the least effort. Because there isn't any kind of config file that we can change to update these parameters, I came up with some ideas. I'm curious about other options that I might not have seen.
Here are the things I came up with:
1. Adding multiple build configurations to the app and then using them in code to get the correct parameter, like this:
private string webserviceUrl =
#if DEV
    "devUrl";
#elif TEST
    "testUrl";
#else
    "prodUrl"; // assumed fallback for any other configuration
#endif
2. Building on option 1, there are a few more possibilities, like including a config XML file based on the build configuration, or fetching configuration settings from a webservice the first time the app runs.
3. Using a branch/merge strategy and updating the config files in each branch. The advantage is that the code is clean and only contains the settings it needs for the build it's created for, and the package can be built by the build server. The disadvantage is that you need to branch/merge a lot.
The last option feels like the cleanest way to do this. Am I missing any options, or do you have experience with any of these methods? What would you prefer?
I think the assumption is that apps in the store will always point to production.
But, in saying that, I'm facing the same issue as we're side loading the application onto devices that we control, and not using the Windows Store at all.
To answer your question, I prefer option 1.
Option 2, with an XML/JSON config file, seems like the best approach though.
The webservice option probably won't work: what URL do you use to fetch the config, and how will it work if you want some instances pointing at different environments when they are all fetching the config from the same URL?
Another option you might want to consider would be options in the settings charm menu. For example, use radio buttons for the environments, and allow the user to configure which environment they want to target.
The issue would be locking it down in production for end users so that it isn't modifiable any more. Perhaps once "PROD" radio is selected, all the radio buttons are then hidden.
If you're deploying the application through side loading, then these settings could probably be configured during the install process.
I'd be interested to hear other opinions as well. This is also an old question, so I'd like to know what solution you decided on implementing.

Restlet static content served from multiple sources

My application needs to be able to serve up static content which can be contained in a number of different places (directories and/or via the class loader). So, for example, a resource /static/file.html might be found in /dir1/file.html or /dir2/file.html; I would want it to try /dir1, and if not found there, then /dir2, and so on.
With servlets in Jetty, I can use either a HandlerList of DefaultServlet, to sequentially try to handle the request from each directory until satisfied, or even easier a single DefaultServlet with a ResourceCollection.
I can't see a way to do something similar in Restlet without writing a class specifically to do this. I could modify Directory to handle multiple sources (similar to DefaultServlet with a ResourceCollection), or write a new Restlet which tries each contained Restlet sequentially until one successfully handles the request (like HandlerList). But before I do that, am I missing an existing way to achieve this?
thanks,
Stuart
I confirm that Directory doesn't know how to handle multiple source directories. It would be nice to add support for this and contribute it back.

Need guidance in creating Rails 3 Engine/Plugin/Gem

I need some help figuring out the best way to proceed with creating a Rails 3 engine (or plugin, and/or gem).
Apologies for the length of this question...here's part 1:
My company uses an email service provider to send all of our outbound customer emails. They have created a SOAP web service and I have incorporated it into a sample Rails 3 app. The goal of creating an app first was so that I could then take that code and turn it into a gem.
Here's some of the background: The SOAP service has 23 actions in all and, in creating my sample app, I grouped similar actions together. Some of these actions involve uploading/downloading mailing lists and HTML content via the SOAP WS and, as a result, there is a MySQL database with a few tables to store HTML content and lists as a sort of "staging area".
All in all, I have 5 models to contain the SOAP actions (they do not inherit from ActiveRecord::Base) and 3 models that interact with the MySQL database.
I also have a corresponding controller for each model and a view for each SOAP action that I used to help me test the actions as I implemented them.
So...I'm not sure where to go from here. My code needs a lot of DRY-ing up. For example, the WS requires that the user authentication info be sent in the envelope body of each request. That means each method in the model has the same auth info hard-coded into it, which is extremely repetitive; obviously I'd like that to be cleaner. Looking back through the code now, I also see that the requests themselves are repetitive and could probably be consolidated.
All of that I think I can figure out on my own, but here is something that seems obvious but I can't figure out. How can I create methods that can be used in all of my models (thinking specifically of the user auth part of the equation).
Here's part 2:
My intention from the beginning has been to extract my code and package it into a gem in case any of my ESP's other clients could use it (plus I'll be using it in several different apps). However, I'd like for it to be very configurable. There should be a default minimal configuration (i.e. just models that wrap the SOAP actions) created just by adding the gem to a Gemfile. However, I'd also like there to be some tools available (like generators or Rake tasks) to get a user started. What I have in mind is options to create migration files, models, controllers, or views (or the whole nine yards if they want).
So, here's where I'm stuck on knowing whether I should pursue the plugin or engine route. I read Jordan West's series on creating an engine and I really like the thought of that, but I'm not sure if that is the right route for me.
So if you've read this far and I haven't confused the hell out of you, I could use some guidance :)
Thanks
Let's answer your question in parts.
Part One
Ruby's flexibility means you can share code across all of your models extremely easily. Are they extending any sort of class? If they are, simply add the methods to the parent object like so:
class SOAPModel
  def request(action, params)
    # Request code goes in here
  end
end
Then it's simply a case of calling request in your respective models. Alternatively, you could expose it as a class method (def self.request) and call it statically with SOAPModel.request. It's really up to you. Otherwise, if (for some bizarre reason) you can't touch a parent class, you could define the methods dynamically:
[User, Post, Message, Comment, File].each do |model|
  model.send :define_method, :request, proc { |action, params|
    # Request code goes in here
  }
end
It's Ruby, so there are tons of ways of doing it.
Part Two
Gems are more than flexible enough to handle your problem; both Rails and Rake are pretty smart and will look inside your gem (as long as it's in your environment file and Gemfile). Create a generators directory and a /name/name_generator.rb, where name is the name of your generator. Then just run rails g name and you're there. The same goes for Rake tasks.
I hope that helps!

Class design for serialization - ideas or patterns?

Let me begin with an illustrative example (assume the implementation is in a statically typed language such as Java or C#).
Assume that you are building a content management system (CMS) or something similar. The data is hierarchically organised into Folders. Each folder has a collection of children; a child may be a Page or a Folder. All items are stored within a root folder. No cycles are allowed. We have an acyclic graph.
The system will have a remote API and instances of Folder and Page must be serialized / de-serialized across the network. With a typical implementation of folder, in which a folder's children are a List, serialization of the root node would send the entire graph. This is unacceptable for obvious reasons.
I am interested to hear how people have solved this problem in the past.
I have two potential suggestions:
Navigation by query: Change the domain model so that the folder class contains only a list of IDs for each child. To access a child we must query for it. Serialisation is now trivial since the graph ends at a well defined point. The major downside is that we lose type safety - the ID could be for something other than a folder/child.
Stop and re-attach: During serialization stop whenever we detect a reference to a folder or page, send the ID instead. When de-serializing we must then look up the corresponding object for each ID and re-attach it at the relevant position in the nascent object.
I don't know what kind of API you are trying to build, but your suggestion #1 sounds like it is close to what is recommended for REST style services and APIs. Basically, a Folder object would contain a list of URLs to its children.
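For illustration, a serialized folder in such a REST style might look like this (a TypeScript sketch; the field names and URL shape are invented):
// Hypothetical shape of a folder resource in a REST-style API:
// children are listed as URLs, not embedded objects, so serializing
// a folder never drags the whole graph across the wire.
interface FolderResource {
  url: string;          // e.g. "https://api.example.com/folders/42" (invented)
  name: string;
  childUrls: string[];  // links to children; the client follows them on demand
}

async function fetchFolder(url: string): Promise<FolderResource> {
  const res = await fetch(url);
  return (await res.json()) as FolderResource;
}

// Usage: walk the tree one level at a time instead of receiving it whole.
async function listChildren(folder: FolderResource): Promise<FolderResource[]> {
  return Promise.all(folder.childUrls.map(fetchFolder));
}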
The Navigation by query solution was used for NFS. Reading through your question, it looks to me as if you're trying to implement a kind of file system yourself.
If you're looking specifically into sending objects over the network, there is always CORBA. Aside from that there are DCOM and the newer WCF, and also RMI. Furthermore there are Web Services. I'll stop here.
Suppose you model the whole tree with every element being a Node, the specialisations of Node being Folder and, umm, Leaf. You have a "root" Node. Nodes have the methods
canHaveChildren()
getChildren()
Leaf nodes have the obvious behaviours (they never even need to hit the network); a Folder's getChildren() fetches the next set of nodes.
I did devise a system with RESTful services along these lines. It seemed reasonably easy to program against.
I would not do it by the Navigation by query method, simply because I would like to stick with a domain model where folders contain folders or pages.
Customizing the serialization might also be tricky, bug-prone and difficult to change or understand.
I would suggest that you introduce an object like FolderBrowser into your model, which takes an ID and gives you a list of the folder's contents. That will make your service operations simpler.
Cheers,
Unmesh
The classical solution is probably to use a proxy pattern, where some of the graph is sent over the network and some of the folders are replaced by proxies that do not have their lists of children populated until they are queried. A round trip to the server takes a significant amount of time, and it will probably result in too many requests if all folders are proxies (that would yield a new request each time the contents of a folder is inspected), so you want some trade-off between the size of each chunk of data and the number of server requests needed in a typical scenario. This is of course application-specific, but sending the contents of all child folders down to, for instance, depth 2 might be a useful strategy...
Long story short: What will probably work best is your solution #1 with the exception that you want to send more than one folder at a time because of the overhead of a round trip to the server...
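As a minimal sketch of that proxy idea (TypeScript; the endpoint URL and payload shape are invented, and in practice you would batch a depth-2 chunk per request as suggested above):
// A folder proxy looks like a folder to callers, but only loads its
// children from the server the first time they are asked for.
interface Page { id: string; title: string; }

class FolderProxy {
  private children?: Array<FolderProxy | Page>; // undefined until first queried

  constructor(readonly id: string, readonly name: string) {}

  async getChildren(): Promise<Array<FolderProxy | Page>> {
    if (this.children === undefined) {
      // Hypothetical endpoint; each unloaded folder costs one round trip.
      const res = await fetch(`https://api.example.com/folders/${this.id}/children`);
      const data = (await res.json()) as Array<{ kind: "folder" | "page"; id: string; name: string }>;
      this.children = data.map((c) =>
        c.kind === "folder" ? new FolderProxy(c.id, c.name) : { id: c.id, title: c.name }
      );
    }
    return this.children;
  }
}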