Can I version control CloudFlare configuration?

I am utilizing Cloudflare for a public website. Recently, I have been adjusting many different configuration values via the website/UI. Is there a way to download/upload the configuration so that it can be version-controlled?

You can configure Cloudflare using Terraform; check out the Terraform Cloudflare Provider.
You can also use cf-terraforming, a tool from Cloudflare that lets you download an existing Cloudflare setup into Terraform configuration.

Here is a quick demo of what it would look like using Terraform:
locals {
  domain   = "example.com"
  hostname = "example.com" # TODO: Varies by environment
}

variable "CLOUDFLARE_ACCOUNT_ID" {}
variable "CLOUDFLARE_API_TOKEN" { sensitive = true }

provider "cloudflare" {
  account_id = var.CLOUDFLARE_ACCOUNT_ID
  api_token  = var.CLOUDFLARE_API_TOKEN
}

resource "cloudflare_zone" "default" {
  zone = local.domain
  plan = "free"
}

resource "cloudflare_record" "a" {
  zone_id = cloudflare_zone.default.id
  name    = local.hostname
  value   = "192.0.2.1"
  type    = "A"
  ttl     = 1 # ttl = 1 means "automatic" for proxied records
  proxied = true
}
For a complete example, see the Terraform Starter Kit:
infra/dns-zone.tf
infra/dns-records.tf
It works especially well with the Terraform Cloud backend, which offers a free tier.
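For reference, a minimal sketch of the Terraform Cloud backend block (the organization and workspace names below are placeholders, not values from the starter kit):
# Hypothetical Terraform Cloud backend; replace the placeholder names.
terraform {
  backend "remote" {
    organization = "my-org"

    workspaces {
      name = "cloudflare-prod"
    }
  }
}
With this in place, the state is stored in the remote workspace rather than on your machine or in the repository.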

Related

Azure Virtual Network subnet connection issues

The problem I'm having is that I have one VNet with two /27 subnets that have been delegated to WebApps:
WebApp-1 -> subnet1
WebApp-2 -> subnet2
I've created the VNet with Terraform:
resource "azurerm_resource_group" "main-rg"{
name = "main-rg"
location = "westeurope"
}
resource "azurerm_virtual_network" "main-vnet" {
name = "main-vnet"
location = azurerm_resource_group.main-rg.location
resource_group_name = azurerm_resource_group.main-rg.name
address_space = ["172.25.44.0/22"]
subnet {
name = "test"
address_prefix = "172.25.44.64/27"
security_group = ""
}
}
resource "azurerm_subnet" "subnet1" {
name = "subnet1"
resource_group_name = azurerm_resource_group.main-rg.name
virtual_network_name = azurerm_virtual_network.main-vnet.name
address_prefixes = ["172.25.44.0/27"]
delegation {
name = "webapp1delegation"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
resource "azurerm_subnet" "subnet2" {
name = "subnet2"
resource_group_name = azurerm_resource_group.main-rg.name
virtual_network_name = azurerm_virtual_network.main-vnet.name
address_prefixes = ["172.25.44.32/27"]
delegation {
name = "webapp2delegation"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
The problem I'm having is when I'm trying to connect the WebApps to their respective subnets.
FYI: I'm connecting the WebApps from the Azure Portal (old test resources, don't want to import them as they will be removed soon).
The first one (WebApp1 to Subnet1) works out fine.
When I then try to connect WebApp2 to Subnet2 it fails, but I am able to connect WebApp2 to Subnet1.
I also tried the other way around; I'm able to connect both apps to Subnet2 (but I first have to disconnect both apps from Subnet1).
I'm not seeing any error messages other than a little "Connection failed" popup in the Portal UI.
So I guess my question is: is it not possible to have 2x subnets with WebApp-delegations in one Vnet, or am I missing something?
And again, sorry if this is something blatantly obvious that I've overlooked.
Thanks in advance!
It's not possible for two web apps in the same App Service plan to integrate with different subnets, because all web apps in the same App Service plan share the VNet integration. However, you can use one integration subnet per App Service plan; a Terraform sketch of that layout follows the list. You can read the limitations:
The integration subnet can be used by only one App Service plan.
You can have only one regional VNet Integration per App Service plan.
Multiple apps in the same App Service plan can use the same VNet.
To avoid any issues with subnet capacity, a /26 subnet address mask with 64 addresses is the recommended size.
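For illustration, a rough Terraform sketch of that layout, one App Service plan per delegated subnet (the plan and app names here are made up; the resource group and subnets are the ones from the question):
# Sketch only: each App Service plan gets its own delegated subnet.
# "plan1" and "webapp1" are hypothetical names.
resource "azurerm_app_service_plan" "plan1" {
  name                = "plan1"
  location            = azurerm_resource_group.main-rg.location
  resource_group_name = azurerm_resource_group.main-rg.name

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "webapp1" {
  name                = "webapp1"
  location            = azurerm_resource_group.main-rg.location
  resource_group_name = azurerm_resource_group.main-rg.name
  app_service_plan_id = azurerm_app_service_plan.plan1.id
}

# Regional VNet integration: everything on plan1 goes through subnet1.
resource "azurerm_app_service_virtual_network_swift_connection" "webapp1" {
  app_service_id = azurerm_app_service.webapp1.id
  subnet_id      = azurerm_subnet.subnet1.id
}

# A second plan/app pair would integrate with subnet2 the same way;
# two apps on the *same* plan cannot use different subnets.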

Terraform use backend on module

I need to optimize the structure of my Terraform code.
In the root path I have variables, which I import like a module:
/variables.tf
variable "aws_profile" { default = "default" }
variable "aws_region" { default = "us-east-1" }
Then I have a module folder:
/ec2_instance/main.tf
module "global_vars" {
source = "../"
}
provider "aws" {
region = module.global_vars.aws_region
profile = module.global_vars.aws_profile
}
terraform {
backend "s3" {
encrypt = true
bucket = "some_bucket"
key = "path_to_statefile/terraform.tfstate"
region = "region"
profile = "profile"
}
}
module "instances_cluster" {
some actions
}
It's working, but I need to move the backend and provider parts to main.tf in the root folder and then include the rest as a module.
How can I do this?
I have tried creating /main.tf in the root folder with the backend block, but it does not work and the backend keeps writing state files locally.
You'd have to do a bit of refactoring, but these are the steps I would take (a sketch of the resulting root main.tf follows the list):
Run terraform plan in root and ec2_instance modules to verify zero changes so refactoring can begin
Comment out the backend for ec2_instance/main.tf
Place the backend from ec2_instance/main.tf into root main.tf
In the root main.tf, make a reference to ec2_instance module
Run terraform plan in root module and note the creations and deletions
For each creation/deletion pair, create a terraform state mv statement and run each
Verify the terraform plan has zero changes
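As a rough sketch (the backend bucket and key are copied from the question; the region, profile, module path, and resource addresses are placeholders), the root main.tf could end up looking something like this:
# Root main.tf sketch: the backend and provider move here,
# and the former ec2_instance configuration becomes a child module.
terraform {
  backend "s3" {
    encrypt = true
    bucket  = "some_bucket"
    key     = "path_to_statefile/terraform.tfstate"
    region  = "us-east-1" # placeholder
    profile = "default"   # placeholder
  }
}

provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile
}

module "ec2_instance" {
  source = "./ec2_instance"
}

# The state mv step then moves existing resources under the module address, e.g.:
#   terraform state mv aws_instance.example module.ec2_instance.aws_instance.example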

S3A client and local S3 mock

To create end-to-end local tests of a data workflow I use a "mock S3" container (e.g. adobe/S3Mock). It seems to work just fine. However, some parts of the system rely on the S3A client. As far as I can see, its URL format does not allow pointing to a particular nameserver or endpoint.
Is it possible to make S3A work in local environment?
Are you talking about the ASF Hadoop S3A connector? Nobody has tested it against S3Mock AFAIK (never seen it before!), but it does work with non-AWS endpoints.
Set fs.s3a.endpoint to the URL of your S3 connection. There are some settings for switching from HTTPS to HTTP (fs.s3a.connection.ssl.enabled = false) and moving from virtual-host-style to path-style access (fs.s3a.path.style.access = true) which will also be needed.
Like I said: nobody has done this. We developers just go against the main AWS endpoints with their problems (latency, inconsistency, error reporting, etc.), precisely because that's what you get in production. But for your local testing, it will simplify your life (and you can run it under Jenkins without having to give it any secrets).
The answer by @stevel worked for me. Here is the code if someone wants to refer to it:
// Assumed imports for this snippet (JUnit 5, AWS SDK v1, findify S3Mock, Hadoop).
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import io.findify.s3mock.S3Mock;
import org.apache.hadoop.conf.Configuration;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class S3WriterTest {
    private static S3Mock api;
    private static AmazonS3 mockS3client;

    @BeforeAll
    public static void setUp() {
        // start mock S3 service using findify
        api = new S3Mock.Builder().withPort(8001).withInMemoryBackend().build();
        api.start();

        /* AWS S3 client setup.
         * withPathStyleAccessEnabled(true) trick is required to overcome S3 default
         * DNS-based bucket access scheme
         * resulting in attempts to connect to addresses like "bucketname.localhost"
         * which requires specific DNS setup.
         */
        EndpointConfiguration endpoint = new EndpointConfiguration("http://localhost:8001", "us-west-2");
        mockS3client = AmazonS3ClientBuilder
            .standard()
            .withEndpointConfiguration(endpoint)
            .withPathStyleAccessEnabled(true)
            .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
            .build();
        mockS3client.createBucket("test-bucket");
    }

    @AfterAll
    public static void tearDown() {
        api.shutdown();
    }

    @Test
    void unitTestForHadoopCodeWritingUsingS3A() {
        Configuration hadoopConfig = getTestConfiguration();
        // ...
    }

    private static Configuration getTestConfiguration() {
        Configuration config = new Configuration();
        config.set("fs.s3a.endpoint", "http://127.0.0.1:8001");
        config.set("fs.s3a.connection.ssl.enabled", "false");
        config.set("fs.s3a.path.style.access", "true");
        config.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider");
        config.set("fs.s3a.access.key", "foo");
        config.set("fs.s3a.secret.key", "bar");
        return config;
    }
}

Sense/net using content query in web application

I am trying to use a content query in a web application, but it throws an exception: "Lucene.Net.Store.AlreadyClosedException: this IndexReader is closed". Please help me resolve this problem.
var startSettings = new RepositoryStartSettings
{
    Console = Console.Out,
    StartLuceneManager = true, // <-- this is necessary
    IsWebContext = false,
    PluginsPath = AppDomain.CurrentDomain.BaseDirectory,
};

using (Repository.Start(startSettings))
{
    var resultQuery = ContentQuery.Query("+InTree:@0 +DisplayName:*@1*", null, folderPath, q);
}
The recommended way to connect to Sense/Net from a different application (app domain) is through the REST API. It is much easier to maintain and involves less configuration. (The only exception is when you are working inside the Sense/Net application itself, or you have only a single application, you do not want to access Sense/Net from anywhere else, and you are willing to deal with a local index of Sense/Net and all the config values it needs.)
Connecting through the REST API does not mean you have to send HTTP requests manually (although that is also not complicated at all): there is a .Net client library which does that for you. You can access all content metadata or binaries through the client, you can upload files, query content, manage permissions, etc.
// loading a content
dynamic content = await Content.LoadAsync(id);
DateTime date = content.BirthDate;
// querying
var results = await Content.QueryAsync(queryText);
Install: https://www.nuget.org/packages/SenseNet.Client
Source and examples: https://github.com/SenseNet/sn-client-dotnet
To use it in a web application, you have to do the following:
initialize the client context once, at the beginning of the application life cycle (e.g. app start)
if you need to make requests to Sense/Net in the name of the currently logged-in user (e.g. because you want to query for documents accessible to her), then you have to create a new ServerContext object for every user with the username/password of that user, and provide this object to any client call (e.g. load or save content methods).
var sc = new ServerContext
{
    Url = "http://example.com",
    Username = "user1",
    Password = "asdf"
};
var content = await Content.LoadAsync(id, sc);

Akka.net cluster broadcast only received by one node

I learned from the Akka.NET WebCrawler example and created my own cluster test. I have a Processor node (a console app) and an API node (SignalR). Here are the configurations.
Processor node:
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    deployment {
      /dispatcher/signalR {
        router = broadcast-group
        routees.paths = ["/user/signalr"]
        cluster {
          enabled = on
          #max-nr-of-instances-per-node = 1
          allow-local-routees = false
          use-role = api
        }
      }
    }
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    helios.tcp {
      port = 0
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://stopfinder@127.0.0.1:4545"]
    roles = [processor]
  }
}
API node (non-seed nodes will have port = 0):
akka {
  actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  }
  remote {
    log-remote-lifecycle-events = DEBUG
    helios.tcp {
      port = 4545
      hostname = 127.0.0.1
    }
  }
  cluster {
    seed-nodes = ["akka.tcp://stopfinder@127.0.0.1:4545"]
    roles = [api]
  }
}
Inside of the API node, I created a normal actor called SignalR.
Inside of the processor node I created a normal actor and used the Scheduler to Tell() the API node's signalR actor some string.
This works great when I have one Processor node and one API node. It also works when I have multiple Processor nodes and a single API node. Unfortunately, when I have multiple API nodes, no matter how I set up the configuration, the Tell() does not reach all of the API nodes; the message only goes to one of them. Which node receives the message depends on the API node start sequence. It seems that I have all of the API nodes registered in the cluster correctly, but I could be wrong.
I'm starting to feel that this is a configuration or understanding issue. Can anyone share any insights?
I did some additional testing. The behavior remains the same when I replace the ASP.NET SignalR API node with a normal console application.
UPDATE: I contacted Akka.NET team. This behavior is a known bug. It will be fixed in 1.1 release.
UPDATE 2: The issue has been marked as fixed for the project on GitHub.