Using ImageFlow Server with multiple Azure Containers - imageresizer

I am currently evaluating ImageFlow Server (https://github.com/imazen/imageflow-dotnet-server) to determine if it will meet the needs of a project that I am working on. Working through the documentation, I was able to get the ImageFlow Server connected to Azure Storage using the following:
public void ConfigureServices(IServiceCollection services)
{
    services.AddImageflowAzureBlobService(
        new AzureBlobServiceOptions("[MY CONNECTION STRING TO AZURE STORAGE]",
                new BlobClientOptions())
            .MapPrefix("/azure", "[CONTAINER No. 1]"));
}
This works without issue and I can see images as expected. However, the current requirements for the project state that each user will have a unique container, which makes the implementation above impossible.
Is there a way to pass the container name along with the file name when making a request? Something like: '/azure/CONTAINER/image.jpg?w=250'

We have an example provider to do exactly that here: https://github.com/imazen/imageflow-dotnet-server/blob/main/examples/Imageflow.Server.Example/CustomBlobService.cs
// Custom blob services can do whatever you need. See CustomBlobService.cs in src/Imageflow.Service.Example
services.AddImageflowCustomBlobService(new CustomBlobServiceOptions()
{
    Prefix = "/custom_blobs/",
    IgnorePrefixCase = true,
    ConnectionString = "UseDevelopmentStorage=true;",
    // Only allow 'my_container' to be accessed. /custom_blobs/my_container/key.jpg would be an example path.
    ContainerKeyFilterFunction = (container, key) =>
        container == "my_container" ? Tuple.Create(container, key) : null
});
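For the per-user-container requirement, the same ContainerKeyFilterFunction can map the first path segment of the request to the user's container. A minimal sketch, assuming containers follow a "user-" naming convention (the prefix, connection string, and naming rule here are illustrative assumptions, not part of the example above):

services.AddImageflowCustomBlobService(new CustomBlobServiceOptions()
{
    Prefix = "/azure/",
    IgnorePrefixCase = true,
    ConnectionString = "[MY CONNECTION STRING TO AZURE STORAGE]",
    // Hypothetical rule: only expose containers named "user-*",
    // so /azure/user-42/image.jpg?w=250 maps to container "user-42", blob "image.jpg".
    ContainerKeyFilterFunction = (container, key) =>
        container.StartsWith("user-", StringComparison.OrdinalIgnoreCase)
            ? Tuple.Create(container, key)
            : null
});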

Related

Spring Cloud Server serving multiple property files for the same application

Let's say I have applicationA that has 3 property files:
-> applicationA
- datasource.properties
- security.properties
- jms.properties
How do I move all properties to a spring cloud config server and keep them separate?
As of today I have configured the config server so that it only reads ONE property file, as this seems to be the standard way. The file the config server picks up seems to be resolved using spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
Not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:
Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile.
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, it creates a ConfigurableApplicationContext using SpringApplicationBuilder, and it is launched with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the only possible names for property files are application.properties (or .yml) and the name of the config client application that is requesting configuration - in your case, applicationA.properties.
But you can "cheat".
In the config server configuration you can add such a property:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in a few directories, so you can use subdirectories to keep your properties separate.
So with the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations
searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code that reads the required property files.
The above code matches exactly the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use JGitEnvironmentRepository as the base for your own implementation.
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties by providing it with - so to speak - "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele - prefabware.com: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can do for a single Spring Boot app, as described in the "Externalized Configuration: Application Property Files" section of the documentation, using the spring.config.name environment property. I hope they review it soon, since it seems many have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It is worth mentioning that many people advise "abusing" the profile feature to achieve this, which is a bad practice, in my humble opinion, as I describe in this answer.

Setting user credentials on aws instance using jclouds

I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failed errors when the client (Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object:
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client:
responses = compute.runScriptOnNodesMatching(
    inGroup(groupName),              // predicate used to select nodes
    exec(command),                   // what you actually intend to run
    overrideLoginCredentials(login)  // use my local user & ssh key
        .runAsRoot(false)            // don't attempt to run as root (sudo)
        .wrapInInitScript(false));
Some login information is injected into the instance with the following commands:
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', and thus I altered its code to:
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is not currently the case, and it also breaks the functionality of running a script (via SSH) since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions??
Note: While I am currently testing this on aws, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described it.
I think the issue relies on the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, the AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that has been created with the previous statement.
The issue here is that the AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it will fall back to the catch block and return an auto-generated SSH key pair. That means the AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, thus the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix specific bits to accommodate the SSH setup in your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("api-key", "api-secret")
    .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
    .buildView(ComputeServiceContext.class);

Updating Data Source Login Credentials for SSRS Report Server Tables

I have added a lot of reports with an invalid data source login to an SSRS report server, and I want to update the user name and password with a script so I don't have to update each report individually.
However, from what I can tell, the fields are stored as images and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the user name and password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
public void ReassignDataSources()
{
    using (ReportingService2005 client = new ReportingService2005())
    {
        var reports = client.ListChildren(FolderName, true).Where(ci => ci.Type == ItemTypeEnum.Report);
        foreach (var report in reports)
        {
            SetServerDataSource(client, report.Path);
        }
    }
}

private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
    var itemSources = client.GetItemDataSources(reportPath);
    if (itemSources.Any())
        client.SetItemDataSources(
            reportPath,
            new DataSource[] {
                new DataSource() {
                    Item = CreateServerDataSourceReference(),
                    Name = itemSources.First().Name
                }
            });
}

private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
    return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN Specifying Credentials
MSDN also suggests using shared data sources for this very reason: See MSDN on shared data sources
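If you do want to update the stored credentials themselves (rather than re-pointing reports at a shared data source), the same web service exposes GetDataSourceContents/SetDataSourceContents. A hedged sketch, where the data source path and credential values are placeholders:

// Hypothetical helper: update the stored credentials of an existing shared data source.
private void UpdateSharedDataSourceCredentials(ReportingService2005 client,
    string dataSourcePath, string userName, string password)
{
    // Read the existing definition, change only the credential fields, and write it back.
    DataSourceDefinition definition = client.GetDataSourceContents(dataSourcePath);
    definition.CredentialRetrieval = CredentialRetrievalEnum.Store;
    definition.UserName = userName;
    definition.Password = password;
    client.SetDataSourceContents(dataSourcePath, definition);
}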

SessionFactory - one factory for multiple databases

We have a situation where we have multiple databases with identical schema, but different data in each. We're creating a single session factory to handle this.
The problem is that we don't know which database we'll connect to until runtime, when we can provide that. But on startup, to get the factory built, we need to connect to a database with that schema. We currently do this by creating the schema in a known location and using that, but we'd like to remove that requirement.
I haven't been able to find a way to create the session factory without specifying a connection. We don't expect to be able to use the OpenSession method with no parameters, and that's ok.
Any ideas?
Thanks
Andy
Either implement your own IConnectionProvider or pass your own connection to ISessionFactory.OpenSession(IDbConnection) (but read the method's comments about connection tracking)
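As a hedged illustration of the second option, assuming an already-built ISessionFactory named sessionFactory; the tenant-resolution helper and the Customer entity are hypothetical:

// Resolve the right database at runtime, then supply the open connection to NHibernate.
using (var connection = new SqlConnection(ResolveConnectionStringForCurrentClient()))
{
    connection.Open();
    using (var session = sessionFactory.OpenSession(connection))
    {
        // NHibernate uses, but does not own, the supplied connection
        // (see the OpenSession(IDbConnection) remarks about connection tracking).
        var customer = session.Get<Customer>(42);
    }
}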
The solution we came up with was to create a class which manages this for us. The class can use some information in the method call to do some routing logic to figure out where the database is, and then call OpenSession passing the connection string.
You could also use the great NuGet package from Brady Gaster for this. I made my own implementation from his NHQS package and it works very well.
You can find it here:
http://www.bradygaster.com/Tags/nhqs
good luck!
Came across this and thought I'd add my solution for future readers. It is basically what Mauricio Scheffer has suggested: it encapsulates the 'switching' of the connection string and provides a single point of management (I like this better than having to pass a connection into each session call; there is less to 'miss' and go wrong).
I obtain the connection string during authentication of the client and set it on the context; then, using the following IConnectionProvider implementation, that value is used as the connection string whenever a session is opened:
/// <summary>
/// Provides ability to switch connection strings of an NHibernate Session Factory (use same factory for multiple, dynamically specified, database connections)
/// </summary>
public class DynamicDriverConnectionProvider : DriverConnectionProvider, IConnectionProvider
{
    protected override string ConnectionString
    {
        get
        {
            var cxnObj = IsWebContext ?
                HttpContext.Current.Items["RequestConnectionString"] :
                System.Runtime.Remoting.Messaging.CallContext.GetData("RequestConnectionString");

            if (cxnObj != null)
                return cxnObj.ToString();

            // Catches app startup, when no request connection string has been set yet
            return base.ConnectionString;
        }
    }

    private static bool IsWebContext
    {
        get { return (HttpContext.Current != null); }
    }
}
Then wire it in during NHConfig:
var configuration = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2005
        .Provider<DynamicDriverConnectionProvider>() // Like so
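For completeness, a minimal sketch of the other half: setting the per-request connection string during client authentication. The helper below is hypothetical; it simply writes to the same locations the provider above reads from.

public static class RequestConnectionString
{
    // Store the resolved connection string where DynamicDriverConnectionProvider will look for it.
    public static void Set(string connectionString)
    {
        if (HttpContext.Current != null)
            HttpContext.Current.Items["RequestConnectionString"] = connectionString;
        else
            System.Runtime.Remoting.Messaging.CallContext.SetData("RequestConnectionString", connectionString);
    }
}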

SQL Server 2005 Connection Question

In SQL Server 2005, is there a way to specify more than one connection string from within a .NET application, with one being the primary preferred connection, and if it is not available, it falls back to trying the other connection (which may be going to a different DB/server, etc.)?
If nothing along those exact lines, is there anything we can use, without resorting to writing some kind of round-robin code to check connections?
Thanks.
We would typically use composition on our SqlConnection objects to check for this. All data access is done via backend classes, and we specify multiple servers within the web/app.config. (Forgive any errors, I am actually writing this out by hand)
It would look something like this:
class MyComponent
{
    private SqlConnection connection;
    ....

    public void CheckServers()
    {
        // Cycle through servers in configuration files, finding one that is usable.
        // When one is found, assign the connection string to the SqlConnection.
        // A simple but resource-intensive way of checking for connectivity is attempting to run
        // a small query and checking the return value.
    }

    public void Open()
    {
        connection.Open();
    }

    // SqlConnection.State is read-only, so only a getter is exposed here
    public ConnectionState State
    {
        get { return connection.State; }
    }

    // Use this method to return the selected connection string
    public string SelectedConnectionString
    {
        get { return connection.ConnectionString; }
    }

    // and so on
}
This example includes no error checking or error logging; make sure you add that so the object can optionally report which connections failed and why.
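As a hedged sketch of the CheckServers idea above (the helper name and the probe query are illustrative, not a definitive implementation):

// Try each configured connection string until one answers a trivial query.
private static string FindUsableConnectionString(IEnumerable<string> candidates)
{
    foreach (string candidate in candidates)
    {
        try
        {
            using (var connection = new SqlConnection(candidate))
            using (var command = new SqlCommand("SELECT 1", connection))
            {
                connection.Open();
                command.ExecuteScalar(); // cheap round trip to prove the server responds
                return candidate;
            }
        }
        catch (SqlException)
        {
            // Log the failure and fall through to the next candidate.
        }
    }
    return null; // no configured server was reachable
}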
Assuming that you'd want to access the same set of data, you'd use clustering or mirroring to provide high availability.
The SQLNCLI provider supports SQL Server database mirroring:
Provider=SQLNCLI;Data Source=myServer;Failover Partner=myMirrorServer
Clustering just uses the virtual SQL instance name.
Otherwise, I can't quite grasp why you'd want to do this...
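For what it's worth, the ADO.NET SqlClient connection string supports the same failover partner keyword; a small illustrative example, with placeholder server and database names:

// Build a connection string whose mirror is tried automatically if the principal is unavailable.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "myServer",
    FailoverPartner = "myMirrorServer",
    InitialCatalog = "myDatabase",
    IntegratedSecurity = true
};

using (var connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open(); // SqlClient fails over to the mirror when the principal cannot be reached
}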
Unfortunately there are no FCL methods that do this - you will need to implement this yourself.