dbt Cloud: downstream models not building

I have a dbt Cloud project that has erp_distributor_selected.sql as its first model. There are downstream models, e.g. erp_sel_flatfile_transaction_references2.sql.
When I run
dbt run --models +erp_distributor_selected+
the model doesn't run (I waited for hours).
The strange part is that when I run
dbt run --models erp_sel_flatfile_transaction_references2
the model builds in a few seconds!
Has anyone faced a similar issue? Any pointers would be helpful.

As @BrandenCiranni mentioned, you should make sure that you are using the ref() function to reference the other models. This is how dbt figures out the order in which it needs to run models. If you are already using the ref() function throughout your code, please post your SQL and dbt_project.yml for more context.
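For illustration, a minimal sketch of what the downstream model could look like when it uses ref() (the file contents are assumed, not taken from the question):

-- erp_sel_flatfile_transaction_references2.sql (sketch)
-- ref() is what lets dbt infer that this model depends on
-- erp_distributor_selected, so graph selectors like + can find it.
select *
from {{ ref('erp_distributor_selected') }}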

Related

dbt view docs button didn't show up after successfully running `dbt docs generate`

I am new to dbt and followed the course to create models. I ran dbt docs generate and it passed, but the view docs button didn't show up; it was still dark with a question mark. Does anybody know why that happened? I know it should be very easy, but I am confused about why the docs weren't generated even though the command passed. Any insights are much appreciated!
Almost there! Now you have to run dbt docs serve
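For reference, a minimal local sequence (this assumes the dbt CLI rather than dbt Cloud; the port is dbt's default):

dbt docs generate   # compiles the project and writes the docs site into target/
dbt docs serve      # serves the generated site, by default at http://localhost:8080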
It sounds like you're using dbt Cloud, not the command line, is that right?
Try deleting your target/ folder and re-running dbt docs generate. If that doesn't work, you can reach out to the support team directly in-app.

How can tf.contrib.seq2seq.TrainingHelper be replaced in TensorFlow 2.2?

I am trying to run a project from GitHub but I'm having trouble with TrainingHelper. I'm stuck on it and don't know how to convert it to TF2. The console always returns an error like this:
AttributeError: module 'tensorflow_addons.seq2seq' has no attribute 'TrainingHelper'
Please help me!
The replacement seems to be https://www.tensorflow.org/addons/api_docs/python/tfa/seq2seq/TrainingSampler
The API is a bit different, though. Some parameters that were passed to the TrainingHelper constructor are passed to TrainingSampler.initialize() instead, and there are minor differences in some of the return values. So you will have to adapt some code during the migration.
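A rough sketch of the migration, where decoder_inputs and sequence_length stand in for your own tensors:

import tensorflow_addons as tfa

# TF1: helper = tf.contrib.seq2seq.TrainingHelper(
#          decoder_inputs, sequence_length, time_major=False)

# TF2 + Addons: time_major stays in the constructor, while the
# inputs and sequence lengths move to initialize().
sampler = tfa.seq2seq.TrainingSampler(time_major=False)
finished, first_inputs = sampler.initialize(
    decoder_inputs, sequence_length=sequence_length)

In practice you usually just pass the sampler to tfa.seq2seq.BasicDecoder, which calls initialize() for you.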

How to fix the problem of downloading fasttext-model300?

I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops after reaching this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "Not Responding" message; it just stops without any reaction.
Does anybody know why this happens?
Thanks
I'd recommend against using gensim's api.load() functionality. It dynamically runs new, unversioned source code from remote servers, which is opaque in its operations and suboptimal both for maintaining a secure local configuration and for debugging any issues that occur.
Instead, find the exact data files you trust and download them as plain data. Then use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
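A minimal sketch of that approach (the file name below is the 300-d fastText vectors as distributed on fasttext.cc; adjust the path to wherever you actually saved the file):

import logging
from gensim.models import KeyedVectors

# INFO logging makes gensim report progress, which shows how far
# loading gets before any stall.
logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s", level=logging.INFO)

# Load vectors you downloaded yourself from a source you trust.
model = KeyedVectors.load_word2vec_format("wiki-news-300d-1M.vec", binary=False)
print(model.most_similar("computer", topn=3))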
Alternatively, try downloading the model from the command line first:
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data

tensorflow-serving gives me a "is fed and fetched" error for the same model that works from the command line

When I run a prediction with saved_model_cli it executes properly, but the same inputs give me an error when I try to run it through tensorflow-serving.
It tells me that an item is being both fed and fetched (because it appears in both the inputs and the outputs), yet it works fine from the command line. Does anybody know why it works with one and not the other, or how to go about fixing it?
This looks similar to the issue mentioned in the link below:
Placeholder_2:0 is both fed and fetched
Please refer to it and see if it resolves your issue. If not, please share a code snippet so the issue can be debugged further.
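One common workaround for this class of error is to make sure the fetched tensor is not literally the fed placeholder, for example by routing it through tf.identity before exporting the SavedModel. A minimal TF1-style sketch (the names are made up):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# The signature fed and fetched the same tensor, which serving rejects.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")

# Fetch an identity of the placeholder instead, so the input and
# output tensors of the serving signature are distinct.
x_out = tf.identity(x, name="x_out")

Re-export the model with x as the signature's input and x_out as its output.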

Elixir's module attributes in Phoenix production

As far as I know, module attributes are evaluated at compile time. I was trying to follow this post about mocking an API in Elixir:
defmodule Example.User do
  @github_api Application.get_env(:example, :github_api)

  def get(username) when is_binary(username) do
    @github_api.make_request(:get, "/users/#{username}")
  end
end
And I'm wondering whether that's going to work in production at all. As far as I understand, when this module is compiled there's no access to the Application environment. So my question is: can I use module attributes to store config values that come from Application.get_env?
You absolutely can. As long as the application was compiled with MIX_ENV set to the environment you want it to run under, and as long as that call evaluates to what you expect for that environment, it will all work fine.
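For illustration, the per-environment config could look like this (the module names are hypothetical), with the chosen module baked into @github_api at compile time:

# config/test.exs
import Config
config :example, :github_api, Example.GithubApi.Mock

# config/prod.exs
import Config
config :example, :github_api, Example.GithubApi.HTTP

Because the attribute is expanded when the module is compiled, changing this value at runtime will not affect the already-compiled module; you would need to recompile under the appropriate MIX_ENV.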
For a deeper look at how module attributes are affected by compilation, in an almost identical case to the one you've described, take a look at this blog post.