I would like to create (if they don't exist) partition tables for each month, for some time into the future:
execute """
CREATE TABLE IF NOT EXISTS #{table}_p#{start_date.year}_#{month}
PARTITION OF #{table} FOR VALUES
FROM ('#{start_date}')
TO ('#{stop_date}')
"""
I run it dynamically, e.g. for the next 12 months starting from today.
I would like to do this in a migration, but have it run every time migrations are executed. I can't use the Repo, since it isn't started yet at migration time, and I didn't find a way to do this with the Ecto.Migration API.
Do you have any ideas how to achieve this?
I solved this with the help of a feature added in Ecto 3.5.0. I added a separate migration folder which is used for so-called 'repeated_migrations'.
@doc """
Repeated migrations are run on each deploy (e.g. to ensure partitions exist). Repetition is achieved by a down that does nothing.
"""
def run_repeated_migrations do
  for repo <- repos() do
    path = Ecto.Migrator.migrations_path(repo, "repeated_migrations")
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, path, :up, all: true))
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, path, :down, all: true))
  end
end
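The repos() call above is the usual release-module helper; a minimal sketch of it, assuming the application is named :my_app (a placeholder) and lists its repos under the standard :ecto_repos key:

defp repos do
  Application.load(:my_app)
  Application.fetch_env!(:my_app, :ecto_repos)
end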
The down function in repeated_migrations/20210427134701_partition_creation.exs:
def down do
  # To make this migration repeatable we run 'up' followed by 'down'. Down does nothing.
  execute "SELECT 1"
end
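For reference, here is a minimal sketch of what the matching up could look like, assuming the partitioned table is named events (a placeholder) and that we want monthly partitions covering the next 12 months:

def up do
  table = "events"
  first = Date.beginning_of_month(Date.utc_today())

  for offset <- 0..11 do
    start_date = months_from(first, offset)
    stop_date = months_from(first, offset + 1)
    month = String.pad_leading(Integer.to_string(start_date.month), 2, "0")

    execute """
    CREATE TABLE IF NOT EXISTS #{table}_p#{start_date.year}_#{month}
    PARTITION OF #{table} FOR VALUES
    FROM ('#{start_date}')
    TO ('#{stop_date}')
    """
  end
end

# Hypothetical helper: the first day of the month `n` months after the given date.
defp months_from(%Date{year: year, month: month}, n) do
  total = year * 12 + (month - 1) + n
  {:ok, date} = Date.new(div(total, 12), rem(total, 12) + 1, 1)
  date
end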
I've been playing around with Elixir/Phoenix modules that wrap third-party services (modules used to fetch data from a 3rd-party service). One of those modules looks like this:
defmodule TwitterService do
  @twitter_url "https://api.twitter.com/1.1"

  def fetch_tweets(user) do
    # The actual code to fetch tweets
    HTTPoison.get(@twitter_url)
    |> process_response
  end

  def process_response({:ok, resp}) do
    {:ok, Poison.decode!(resp.body)}
  end

  def process_response(_fail), do: {:ok, []}
end
The actual data doesn't matter in my question. So now, I'm interested in how I can dynamically configure the @twitter_url module attribute in tests to make some of the tests fail on purpose. For example:
defmodule TwitterServiceTest do
  use ExUnit.Case

  test "Module returns {:ok, []} when Twitter API isn't available" do
    # I'd like this to be possible (coming from the world of Rails)
    TwitterService.configure(:twitter_url, "new_value") # This line isn't possible
    # Now the TwitterService shouldn't get anything from the url
    tweets = TwitterService.fetch_tweets("test")
    assert {:ok, []} = tweets
  end
end
How can I achieve this?
Note: I know I can use the application config to configure @twitter_url separately in dev and test environments, but I'd like to be able to test against a real response from the Twitter API too, and that would change the URL for the entire test environment.
One of the solutions that I came up with was:
def fetch_tweets(user, opts \\ []) do
  _fetch_tweets(user, opts[:fail_on_test] || false)
end

defp _fetch_tweets(user, true) do
  # Fails
end

defp _fetch_tweets(user, false) do
  # Normal fetching
end
But that just seems hackish and silly; there must be a better solution to this.
As José suggested in Mocks and Explicit Contracts, the best way would probably be to use dependency injection:
defmodule TwitterService do
  @twitter_url "https://api.twitter.com/1.1"

  def fetch_tweets(user, service_url \\ @twitter_url) do
    # The actual code to fetch tweets
    service_url
    |> HTTPoison.get()
    |> process_response
  end

  ...
end
Now in tests you just inject another dependency when necessary:
# to test against real service
fetch_tweets(user)
# to test against mocked service
fetch_tweets(user, SOME_MOCK_URL)
This approach will also make it easier to plug in a different service in the future. The processor implementation should not depend on its underlying service, assuming the service follows some contract (in this particular case, responding with JSON given a URL).
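Taking the article's idea a step further, you can inject the whole HTTP client behind a small contract instead of just the URL. A rough sketch, with made-up module names (TwitterService.HTTPClient, TwitterService.UnavailableClient) purely for illustration:

defmodule TwitterService.HTTPClient do
  @callback get(String.t()) :: {:ok, term()} | {:error, term()}
end

defmodule TwitterService.UnavailableClient do
  @behaviour TwitterService.HTTPClient

  # Simulates the Twitter API being unreachable.
  def get(_url), do: {:error, :service_unavailable}
end

# Inside TwitterService, the client module becomes the injected dependency:
def fetch_tweets(user, client \\ HTTPoison) do
  client.get(@twitter_url)
  |> process_response
end

In the failure test you would then call TwitterService.fetch_tweets("test", TwitterService.UnavailableClient) and assert {:ok, []} = tweets as before.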
The application config sounds like a good way to go here. You can modify the value in the config at runtime in your test and then restore it after the test.
First, in your actual code, instead of @twitter_url, use Application.get_env(:my_app, :twitter_url).
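For example, a minimal sketch of what fetch_tweets might look like after that change (assuming the OTP app is named :my_app, and keeping the old URL as a default):

def fetch_tweets(user) do
  twitter_url = Application.get_env(:my_app, :twitter_url, "https://api.twitter.com/1.1")

  twitter_url
  |> HTTPoison.get()
  |> process_response
end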
Then, in your tests, you can use a wrapper function like this:
def with_twitter_url(new_twitter_url, func) do
  old_twitter_url = Application.get_env(:my_app, :twitter_url)
  Application.put_env(:my_app, :twitter_url, new_twitter_url)

  try do
    func.()
  after
    # Restore the original value even if the test raises
    Application.put_env(:my_app, :twitter_url, old_twitter_url)
  end
end
Now in your tests, do:
with_twitter_url "<new url>", fn ->
  # All calls to your module here will use the new url.
end
Make sure you're not using async tests for this, as this technique modifies the global application environment.
I'm currently testing a controller that uses the function create_zone, which depends on a function that retrieves a user in order to associate said user with a zone, and then creates a participant entry, which is just a row in an association table linking the two.
def create_zone(attrs \\ %{}, user_id) do
  user = Accounts.get_user!(user_id)

  with {:ok, %Zone{} = zone} <-
         %Zone{}
         |> Zone.changeset(attrs, user)
         |> Repo.insert() do
    create_participant(zone, user)
  end
end
I would like to test it using ExUnit, but the problem is that the test tries to look up a non-existent record in the database:
** (Ecto.NoResultsError) expected at least one result but got none in query:
from u in Module.Accounts.User,
where: u.id == ^1
How could I mock or create it just for testing purposes?
Don't mock it, create it with ex_machina: https://github.com/thoughtbot/ex_machina
Mocking is discouraged in Elixir: http://blog.plataformatec.com.br/2015/10/mocks-and-explicit-contracts/ (you don't really need to read it now, but in case you want to mock some external resource, read it).
You can write a simple factory module that uses Ecto to insert into the database. Each test will be wrapped in a database transaction and rolled back automatically thanks to the Ecto SQL sandbox (Ecto.Adapters.SQL.Sandbox).
defmodule Factory do
  def create(User) do
    %User{
      name: "A. User",
      email: "user_#{:rand.uniform(10000)}@mail.com"
    }
  end

  def create(Zone) do
    %Zone{
      # ... random / default zone attributes here...
    }
  end

  def create(schema, attrs) do
    schema
    |> create()
    |> struct(attrs)
  end

  def insert(schema, attrs \\ []) do
    Repo.insert!(create(schema, attrs))
  end
end
Then in your test custom attributes are merged with the factory defaults, including associations.
test "A test" do
user = Factory.insert(User, name: "User A")
zone = Zones.create_zone(user.id)
assert zone
end
See chapter 7 of What's New in Ecto 2.1 for a more detailed explanation.
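The per-test rollback mentioned above relies on the Ecto SQL sandbox; if it isn't configured yet, here is a minimal sketch of the usual setup (app and repo names are placeholders):

# config/test.exs
config :my_app, MyApp.Repo, pool: Ecto.Adapters.SQL.Sandbox

# test/test_helper.exs
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :manual)

# in each test case (or in a shared DataCase)
setup do
  :ok = Ecto.Adapters.SQL.Sandbox.checkout(MyApp.Repo)
end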
We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating the test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream builds.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
  if (it instanceof hudson.model.ParametersAction) {
    it.parameters.each {
      value = it.createVariableResolver(manager.build).resolve(it.name)
      params += it
      val += value
    }
  }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)

jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
  manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script
import hudson.model.*
import jenkins.model.*
def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
  job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
  println "Scheduling job name is: ${job.name}"
  job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([
    new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"),
    new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")
  ]))
}
If you don't need to pass properties from one build to the other, just take the ParametersAction out.
The build you schedule will have the same "cause" as your initial build. That's a nice way to pass in the "Changes". If you don't need this, just don't use new Cause.UpstreamCause(build) in the function call.
Since you are already starting the downstream jobs dynamically, how about you wait until they are done and copy the test result files (I would archive them on the downstream jobs and then just download the build artifacts) to the parent workspace? You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. In the post-build step of the parent job, configure the appropriate test plugin.
Using the Groovy Postbuild Plugin, maybe something like this will work (haven't tried it)
def job = hudson.model.Hudson.instance.getItem(jobname)
hudson.model.Hudson.instance.queue.schedule(job, 0)
I am actually surprised that if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job) the aggregated results are not picked up. In my understanding, Jenkins simply looks at md5sums to relate jobs (Aggregate downstream test results), and triggering via the CLI should not affect aggregating results. Somehow there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using "Execute system groovy script":
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
I'm developing an application dedicated to generating statistical reports. I would like users, after saving their stat report, to save the SQL queries too. To do that I wrote the following module:
module SqlHunter
  class ActiveRecord::ConnectionAdapters::AbstractAdapter
    @@queries = []
    cattr_accessor :queries

    def log_info_with_trace(sql, name, runtime)
      return unless @logger && @logger.debug?
      @@queries << sql
    end

    alias_method_chain :log_info, :trace
  end
end
In the controller I wrote this:
sqlfile = File.open("public/advancedStats/#{@dir_name}/advancedStatQuery.sql", 'w')
@queries = ActiveRecord::ConnectionAdapters::AbstractAdapter::queries
for query in @queries do
  sqlfile.write("#{query} \n")
end
sqlfile.close
I also modified the Rails environment by adding this line:
ActiveRecord::Base.logger.level = Logger::DEBUG
This program is working and I can get all the queries, but I need only the specific queries made by one user to generate a stat report.
Does someone have any idea?
Thanks,
mgatri
You could add an accessor that says whether you wish to log or not.
@@queries = []
@@logging = false
cattr_accessor :queries, :logging

def log_info_with_trace(sql, name, runtime)
  return unless @logger && @logger.debug?
  @@queries << sql if @@logging
end
When you set ActiveRecord::ConnectionAdapters::AbstractAdapter.logging = true, your SQL queries will be logged. When you set it to false, they won't.
So you can log them only whenever you want.
To erase the old queries, you just need to clear the array.
ActiveRecord::ConnectionAdapters::AbstractAdapter::queries.clear
I'm developing an application using Ruby on Rails.
I would like to erase old queries in the ActiveRecord::Base.logger object every time I call a new action, essentially in the production environment.
The goal is not to suppress all query logging, as with config.log_level = :info; I need only the last queries, to build a file from them.
Here is some code:
in the lib:
module SqlHunter
  class ActiveRecord::ConnectionAdapters::AbstractAdapter
    @@queries = [] # without this line it works perfectly
    @@logging = false
    cattr_accessor :queries, :logging

    def log_info_with_trace(sql, name, runtime)
      @@queries << sql if @@logging
    end

    alias_method_chain :log_info, :trace
  end
end
In the controller (report action):
ActiveRecord::ConnectionAdapters::AbstractAdapter::logging = true
.....
sqlfile = File.open("public/advancedStats/#{@dir_name}/advancedStatQuery.sql", 'w')
@queries = ActiveRecord::ConnectionAdapters::AbstractAdapter::queries
for query in @queries do
  sqlfile.write("#{query} \n")
end
sqlfile.close
I asked an old related question here
link text
Thanks to Tamás Mezei and Damien MATHIEU for their last answer
Mondher
So you want to filter the SQL queries in production mode or am I missing the point?
If it's about just filtering, prod mode will automatically filter SQL queries. If you'd like to filter when developing, edit the config/environments/development.rb file and insert:
config.log_level = :info
Essentially, it will filter out SQL along with all the other output that's below the info level (debug output).
If you want a more sophisticated solution, you can always extend/override the AbstractAdapter class in
RUBY_HOME/lib/gems/1.8/gems/activerecord-nnn/active_record/connection_adapters/abstract_adapter.rb