How to set up a project and break it into sub-projects, and how to use Slick in this setup - intellij-idea

This is a brand new project, so I can use the latest version of Play.
I am using IntelliJ 13.
I want to break out the models/db/service layer into its own sub-project, because I will also have a job service (reading messages off a queue, for example) that will need this service layer too.
Since Slick is outside of Play, how do I set up the data source for this project, keeping in mind that I will be connecting to multiple databases?
Do I need to create a custom config file for this?
web-app (the Play 2 app)
  depends on: service
service (models + dao)
  - models
  - dao
jobs
  depends on: service
I don't see any examples like this, which I find strange, because I think pretty much any real-world project (beyond simple examples) would have to be set up this way.
Can someone show me sample code where things are broken down like this?

This example isn't broken into sub-projects, but it is very split up and would allow you to specify multiple databases.
https://github.com/geigerma/play-cake
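For the sub-project layout itself, here is a minimal sketch of what the sbt multi-project build might look like. This is a sketch, not a definitive setup: the project names and versions are made up for illustration, and it assumes a recent sbt with the Play sbt plugin (enablePlugins(PlayScala)).

    // build.sbt (root) - hypothetical wiring of the three sub-projects
    lazy val service = (project in file("service"))   // models + dao live here
      .settings(
        libraryDependencies ++= Seq(
          "com.typesafe.slick" %% "slick"  % "3.1.1", // Slick used standalone, outside Play
          "com.typesafe"        % "config" % "1.3.0"  // Typesafe Config for datasource settings
        )
      )

    lazy val jobs = (project in file("jobs"))         // queue consumers; reuses the service layer
      .dependsOn(service)

    lazy val `web-app` = (project in file("web-app")) // the Play application
      .enablePlugins(PlayScala)
      .dependsOn(service)

For the data sources: since Slick sits outside Play, the service sub-project can own a plain Typesafe Config file that both the web app and the jobs process pick up - so yes, a custom config file, but nothing Play-specific. A sketch assuming Slick 3.x (where Database.forConfig reads a named config block) and made-up database names:

    // service/src/main/resources/reference.conf would contain HOCON blocks like:
    //   ordersDb { driver = "org.postgresql.Driver", url = "jdbc:postgresql://localhost/orders" }
    //   auditDb  { driver = "org.postgresql.Driver", url = "jdbc:postgresql://localhost/audit" }

    import slick.jdbc.JdbcBackend.Database

    // One Database handle per configured block. Both the Play app and the
    // jobs process go through this object, so the service layer owns its
    // own datasources and can talk to multiple databases.
    object DataSources {
      lazy val orders = Database.forConfig("ordersDb")
      lazy val audit  = Database.forConfig("auditDb")
    }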

Related

Best Practice for deploying ADF pipelines

I am brand new to ADF and am creating my very first data factory. I am using the UI option (if anyone can point me to any documents for doing it in code, I'd be most grateful).
I will have 3 different environments - dev/test/prod. Each of these has slightly different configs (yes, I know!), so my datasets and linked services will need to change for each environment. What is the best way to do this? How would you approach it?
(P.S.: We also have Bitbucket and Jenkins/Octopus for CI/CD, so ideally we would like to create scripts to automate this if possible.)
Thank you
You can create a data factory using code; you can find code with detailed information here.
There are two approaches to deploying an ADF pipeline:
ARM template
Custom approach (JSON files, via REST API) - with this approach we can fully automate the CI/CD process, as the collaboration branch becomes our source for deployment. This is why the approach is also known as (direct) deployment from code (JSON files).
Refer to this blog by Kamil Nowinski.
The scope of the question is broad, but this video by Mohamed Radwan shows in practice how you can deploy and manage 3 different environments, i.e. ADF-DEV, ADF-PROD and ADF-UAT.
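To make the ARM-template route concrete: when you publish, ADF generates an ARM template plus a parameters file, and the usual pattern is to keep one parameters file per environment so the identical template can be deployed to dev/test/prod with different linked-service settings. A hypothetical dev parameters file (the factory and parameter names below are made up for illustration):

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "factoryName": { "value": "adf-myproject-dev" },
        "AzureSqlLinkedService_connectionString": {
          "value": "Server=tcp:dev-sqlserver.database.windows.net;Database=devdb;"
        }
      }
    }

Your Jenkins/Octopus pipeline would then run the same deployment step three times, pointing at dev.parameters.json, test.parameters.json and prod.parameters.json respectively.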

Sharing HDI container in MTA XSA application

I have some experience developing XS classic apps in the SAP Cloud Platform Neo environment, and I plan to migrate to XS Advanced in the Cloud Foundry landscape. I have a few fundamental doubts which, sadly, I could not resolve even after reading the documentation.
If I understand correctly, XSA is Cloud Foundry extended by SAP to support SAP HANA 2.0 as a service via the HDI container, allowing traditional XSJS to run as a Node.js application, plus a bunch of other SAP-specific services?
The MTA (multi-target application) development promoted by SAP looks neat; however, I have a few questions when it comes to working in an environment where multiple developers work on the same MTA in the "dev" space, for example.
The typical MTA is composed of a web module where the UI resides, a Node module that holds the services, and a db module that is the entire HDI container holding the tables, views and the actual data.
The developers don't work with Web IDE; they use VS Code and the cf CLI.
Question 1: If I want each developer to have an isolated MTA during development, must each developer push the same MTA app under a different name while working on some feature (preferably a feature branch that gets pushed as a new app)? Every line of code changed then needs a push to CF.
Question 2: Pushing the app with the same schema name in the MTA file creates a new schema for the second developer in the same HDI container (I am not sure if this is correct; however, it is what I understand from: here).
But the second schema will remain empty and may not contain data. Do we then have to take care of data replication from schema 1 to schema 2? Won't this explode the space usage?
As I said, I did not find documentation about how multiple developers can work on the same MTA app in a shared space, so any guidance will help.
Thank you
Multiple developers are able to work on the same MTA app because of git functionality and separate workspaces dedicated to each developer. More than that, you can personally have more than one workspace, which means you can develop several features yourself, separately.
When you build your container or run your application in your space, you get a unique schema name and application URL each time, which means you can work independently.
The tricky part is when you want to join your committed code with that of the other developers. Basically, it depends on how big your project is. If you have a large project, it's better to control merges using code review and unit tests; if you are on a small project and work on different files, you can merge your code to the master branch easily by yourself.
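To illustrate why each developer can get an isolated schema: in the MTA descriptor the db module only declares a dependency on an HDI container resource, and every deployment of that descriptor into a different space (or under a different service-instance name) produces its own container with its own generated schema. A sketch of the relevant mta.yaml fragment, with made-up module and resource names:

    # mta.yaml (fragment) - hypothetical names
    modules:
      - name: my-app-db
        type: hdb                # design-time artifacts: tables, views, ...
        path: db
        requires:
          - name: my-app-hdi-container

    resources:
      - name: my-app-hdi-container
        type: com.sap.xs.hdi-container   # each deployment gets its own generated schema

Because the schema is generated per container instance, a second developer's deployment starts empty - which is also why any test data has to be loaded (or replicated) separately, as you suspected.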

Wanting to get rid of the Test Hub in TFS and integrate with Test Rail

Just wondering if there is a way to integrate TFS with TestRail, replacing (getting rid of entirely) the Test Hub within TFS and using TestRail to record test plans?
My concern with removing the Test Hub is whether TestRail could still reference the IDs of Bugs and Stories within TFS, and vice versa.
You currently cannot remove the standard hubs in Visual Studio Team Services / TFS or replace them with something else. You can enable an extension that either adds its own functionality under said hub as a separate tab, or adds another TestRail hub at the top level (if such an extension exists), or you can write your own. Extensions currently cannot leave their sandbox to overwrite standard functionality.
There is nothing preventing anything from keeping track of work item numbers anywhere, so as to the second part of your question - whether it would break any form of integration - that's unlikely.
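On that second point: work item IDs are plain integers that any external tool can store, and TFS exposes them over a REST API, so TestRail (or a small sync script) can always resolve a stored ID back to the Bug or Story it refers to. A hypothetical example against an on-premises TFS server (server name, credentials and ID are made up; the exact api-version depends on your TFS release):

    curl -u user:personalaccesstoken \
      "http://tfs-server:8080/tfs/DefaultCollection/_apis/wit/workitems/42?api-version=1.0"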
If you are on TFS, you could try creating a custom process template that doesn't have the Test Case, Shared Step, Test Suite and Test Plan work item types; this will likely cripple the existing functionality completely. In the on-premises version you can also customize the files on disk. I've never tried, but it's likely that you could hack the Test Hub away. That would be a totally unsupported scenario, though.

Composite C1 - develop locally, sync to live site

I have a couple of Composite C1 CMS websites.
To edit them, I currently use the web-based CMS on the live site.
However, I would like to update the code and content locally in Visual Studio and then sync to the web. The problem is that if my local copy is older than what is online (e.g. a non-techy client has edited something on the live site) and I Web Deploy, it will go over the top of the newer file on the server.
I need a solution that works out which change is the newest. I can't find anything on Google or in the C1 docs.
How can I sync, preferably using Web Deploy? Do I need some kind of version control?
Is there a best practice for this? Editing the live site through the web interface seems a bit dicey, and it is slow.
The general answer to this type of scenario seems to be to use the Package Creator. With that you can develop locally, add the files you've changed to a package, and install that package on the live site. This solution does not cover all parts of your question, though, and it has certain limitations:
You cannot selectively add content to a package. It's all pages or no pages.
Adding datatypes is easy, but updating them later requires you to delete the datatype (and data), and recreate the datatype.
In my experience packages work well for incremental site updates, if you limit the package contents to front-end stuff like CSS, images and such.
You say you need a solution that works out the newest changes - I believe the only solution to this is yourself, with the aid of some tooling. I don't think there's a silver-bullet solution here.
Should you use a version control system? Yes! By all means. Even if you are not sharing your code with anyone, a VCS is a great way to get to know Composite C1 from a file-system perspective, as you can carefully track which files change on disk as you develop. This knowledge is crucial when you want to continuously add features to a website that is already alive and kicking - you need to know what to deploy, and what not to touch.
Make sure you read the docs on how Composite fits in VCS: http://docs.composite.net/Configuration/C1-and-Version-Control
I assume that your sites are using the XML data storage (if you were using the SQL Data Store, your content would not be overwritten upon sync).
This means that your entire web application lives in one folder on disk on the web server, which can be an advantage here.
I'll try to outline a solution that could work for you, although I must stress that I've never tried this - I'm making it up as I type.
Let's say you're using git: download the site in its entirety from the production web server, and commit the whole damned thing* to your master branch.
Then you create a new feature branch from that commit and start making the changes you want to deploy later, carefully committing your work as you go along and making sure that only the changes needed for your feature end up on the feature branch.
Now you are ready to deploy: you switch back to the master branch, and again download the entire site and commit it to master.
You then merge your feature branch into the master branch, and have git do all the hard work of stitching your changes in with the changes from the live site. There are bound to be merge conflicts, and that is where you will have to jump in and decide for yourself what content needs to go live.
After this is done and tested, you can web deploy the site up to the production environment.
Changes to the live site might occur while you are merging, so consider closing the site, or parts of it, during this process.
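A rough sketch of that whole flow as git commands (assuming the site root is the repository root, and that "download" means copying the live files over your working tree):

    # one-time: snapshot the live site
    git init
    git add -A
    git commit -m "snapshot: live site"

    # develop on a branch
    git checkout -b feature/new-thing
    # ...edit, test locally, commit as you go...

    # before deploying: snapshot the live site again on master
    git checkout master
    # (re-download the live site over the working tree here)
    git add -A
    git commit -m "snapshot: live site, pre-deploy"

    # stitch your work in; resolve any conflicts by hand
    git merge feature/new-thing

    # test, then Web Deploy the merged result to production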
If you are using the SQL Data Store, I suggest paying for a tool like Red Gate's SQL Compare and SQL Data Compare, or SQL Delta, to compare your dev database to the production database, and hand-pick SQL scripts that can be applied to the production database along with your feature deployment.
* Do consider using a .gitignore file to avoid committing certain files - refer to the docs for more info.
I suppose you should use the Package Creator.
Also have a look here: http://docs.composite.net/Configuration/C1-and-Version-Control

Website data retrieval

A recent article has prompted me to pick up a project I have been working on for a while. I want to create a web-service front end for a number of sites to allow automated completion of forms and retrieval of data from the results, and from other areas of the site. I have achieved a degree of success using Selenium and custom code; however, I am looking to extend this to a stage where adding additional sites is a trivial task (maybe even one which doesn't require a developer).
The Kapow web data server looks to achieve a lot of this; however, I am told it is quite expensive (I'm currently awaiting a quote). Has anyone had experience with it, or can anyone suggest alternatives (open source, ideally)?
Disclaimer: I realise the potential legality issues around automating data retrieval from third-party websites - this tool is designed to be used in a price-comparison system, and all of the websites integrated with it will be integrated with the express permission of the owners. Where the sites provide an API, that will clearly be the favoured approach.
Thanks
I realise it's been a while since I posted this; however, should anyone come across it: I have had a lot of success using the WSO2 framework (particularly the Mashup Server) for this. For data-mining tasks I have also used a Java library that it wraps - WebHarvest - which achieved everything I needed.
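For anyone evaluating it: WebHarvest is driven by XML configurations rather than code, which is part of what makes adding a new site cheap. A minimal sketch from memory (treat the exact element names and the URL as indicative only):

    <!-- fetch a page, normalise it to XML, then pull values out with XPath -->
    <config charset="UTF-8">
      <var-def name="prices">
        <xpath expression="//span[@class='price']/text()">
          <html-to-xml>
            <http url="http://www.example.com/products"/>
          </html-to-xml>
        </xpath>
      </var-def>
    </config>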