Load template from a string instead of from a file - django-templates

I have decided to save templates of all system emails in the DB.
The bodies of these emails are normal Django templates (with tags).
This means I need the template engine to load the template from a string and not from a file. Is there a way to accomplish this?

Instantiate a django.template.Template(), passing the string to use as a template.

To complement the answer from Ignacio Vazquez-Abrams, here is the code snippet that I use to get a template object from a string:
from django.template import engines, TemplateSyntaxError

def template_from_string(template_string, using=None):
    """
    Convert a string into a template object,
    using a given template engine or using the default backends
    from settings.TEMPLATES if no engine was specified.
    """
    # This function is based on django.template.loader.get_template,
    # but uses Engine.from_string instead of Engine.get_template.
    chain = []
    engine_list = engines.all() if using is None else [engines[using]]
    for engine in engine_list:
        try:
            return engine.from_string(template_string)
        except TemplateSyntaxError as e:
            chain.append(e)
    raise TemplateSyntaxError(template_string, chain=chain)
The engine.from_string method will instantiate a django.template.Template object with template_string as its first argument, using the first backend from settings.TEMPLATES that does not result in an error.
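For example (a minimal sketch; the template string and context values are placeholders, not from the original post), the returned template object can be rendered with a plain dict:
template = template_from_string("Hello {{ name }}!")
rendered = template.render({"name": "World"})  # backend templates accept a plain dict as context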

Using Django's Template together with Context worked for me on Django >= 3.
from django.template import Template, Context
template = Template('Hello {{name}}.')
context = Context(dict(name='World'))
rendered: str = template.render(context)
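Since the original question is about system emails stored in the database, here is a minimal sketch of tying this together (the EmailTemplate model, its fields, and the addresses are assumptions for illustration, not part of the original post):
from django.core.mail import send_mail
from django.template import Template, Context
from myapp.models import EmailTemplate  # hypothetical model storing the template text in a "body" field

email_template = EmailTemplate.objects.get(slug='welcome')
body = Template(email_template.body).render(Context({'name': 'World'}))
send_mail('Welcome', body, 'noreply@example.com', ['user@example.com'])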

Related

spaCy custom component with additional parameter

Is there a way to add an additional parameter (besides the doc) to a custom component?
More specifically, I want to pass some rule to a custom component as a parameter.
match_word = "Anna"
def comp1(doc, match_word):
if match_word ==doc[0]:
# modify doc
return doc
nlp = add_pipe(comp1)
doc = nlp("Anna Smith lives in Melbourne")
The problem is that I pass a function to add_pipe without being able to pass any parameters to it.
Any hints on why this does not make sense at all, or on how to solve it (differently), are very welcome.
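A common workaround (just a sketch, assuming spaCy v2, where nlp.add_pipe accepts a plain callable) is to carry the extra parameter on a class instance (or in a closure / functools.partial) so the pipeline still calls the component with nothing but the doc:
import spacy

class MatchComponent:
    def __init__(self, match_word):
        self.match_word = match_word  # the extra parameter lives on the component instance

    def __call__(self, doc):
        if len(doc) and doc[0].text == self.match_word:
            pass  # modify doc here
        return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(MatchComponent("Anna"), name="match_component", last=True)
doc = nlp("Anna Smith lives in Melbourne")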

Calling a feature and passing in javascript variables

Been trying to figure this out for a while, please could someone help.
I have a set of 5 lines which I'd like to make reusable.
The lines do a "check event XXX has fired".
The lines make use of the "karate" variable and also the "json" command.
They're of the form:
* def message = myUtils.grabEvent(karate, myMessageListener)
* json event = message.text
* match event contains { ... some json in here ... }
* json eventPayload = event.payload
* match event contains { ... some payload json in here ... }
How do I go about making this reusable?
I have tried:
(A) Putting it all into a Javascript function
This failed because I don't know how to replicate the "json" command in Javascript
(B) Putting it all into a .feature file and calling that
This failed because I don't know how to pass the "karate" and "myMessageListener" variables into parameters of the .feature file.
Is it possible to put this into a reusable code block, please?
TIA
Yes, I would recommend making this a reusable feature. Refer to the documentation here: https://github.com/intuit/karate#calling-other-feature-files
Passing parameters is simple; the call would look like:
* def result = call read('reusable.feature')
Note that, by default, the "called" feature will "inherit" the variables of the calling feature, so in this case karate and myMessageListener do not need to be passed explicitly.

Scrapy + extract only text + carriage returns in output file

I am new to Scrapy and am trying to extract content from a web page, but I am getting lots of extra characters in the output. See image attached.
How can I update my code to get rid of these characters? I only need to extract the href values from the web page.
My code:
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]
    rules = ()

    def create_dirs(dir):
        if not os.path.exists(dir):
            os.makedirs(dir)
        else:
            shutil.rmtree(dir)  # removes all the subdirectories!
            os.makedirs(dir)

    def __init__(self, name=None, **kwargs):
        super(AttractionSpider, self).__init__(name, **kwargs)
        self.items_buffer = {}
        self.base_url = "http://quotes.toscrape.com/page/1/"
        from scrapy.conf import settings
        settings.overrides['DOWNLOAD_TIMEOUT'] = 360

    def write_to_file(file_name, content_list):
        with open(file_name, 'wb') as fp:
            pickle.dump(content_list, fp)

    def parse(self, response):
        print("Start scrapping webcontent....")
        try:
            str = ""
            hxs = Selector(response)
            links = hxs.xpath('//li//@href').extract()
            with open('test1_href', 'wb') as fp:
                pickle.dump(links, fp)
            if not links:
                return
                log.msg("No Data to scrap")
            for link in links:
                v_url = ''.join(link.extract())
                if not v_url:
                    continue
                else:
                    _url = self.base_url + v_url
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise

    def parse_details(self, response):
        print("Start scrapping Detailed Info....")
        try:
            hxs = Selector(response)
            yield l_venue
        except Exception as e:
            log.msg("Parsing failed for URL {%s}" % format(response.request.url))
            raise
Now I must say... you obviously have some experience with Python programming, congrats, and you're obviously doing the official Scrapy docs tutorial, which is great, but for the life of me I cannot tell, from the code snippet you provided, exactly what you're trying to accomplish. But that's OK, here are a couple of things:
You are using a Scrapy crawl spider (CrawlSpider). With a CrawlSpider, the rules define the following (the pagination, if you will) and point a callback to the function that does the extraction or itemization whenever a page matches a rule's regular expression. It is absolutely crucial to understand that you cannot use a CrawlSpider without setting the rules, and, equally important, that with a CrawlSpider you cannot use the parse function, because parse is already a native built-in function within the CrawlSpider itself. Do go ahead and read the docs, or just generate a CrawlSpider and see how its template is not created with parse.
Your code
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = [
        'http://quotes.toscrape.com/page/1/'
    ]
    rules = ()  # big no no ! s3 rul3s
How it should look:
class AttractionSpider(CrawlSpider):
    name = "get-webcontent"
    start_urls = ['http://quotes.toscrape.com']  # this would be considered a base url

    # regex is our bf, know it well; basically, all pages that follow
    # this pattern ... page/.* (meaning all following pages, no exception)
    rules = (
        Rule(LinkExtractor(allow=r'/page/.*'), callback='parse_item', follow=True),
    )
Number two: go over the point I mentioned about using the parse function with a Scrapy crawl spider; you should use parse_item instead. I assume that you at least looked over the official docs, but to sum it up: the reason parse cannot be used is that the crawl spider already uses parse within its own logic, so by defining parse in a CrawlSpider you're overriding a native function it relies on, which can cause all sorts of bugs and issues.
That's pretty straightforward; I don't think I have to show you a snippet for it, but feel free to go to the official docs and, on the right side where it says "Spiders", scroll down until you hit "CrawlSpider"; it gives some notes with a caution...
To my next point: you do not have a callback that goes from parse to parse_details, which leads me to believe that when you perform the crawl you don't get past the first page. Aside from that, you're trying to create a text file (or rather you're using the os module to write something out, but you're not actually writing anything), so I'm quite confused as to why you are writing a file here instead of reading one.
I mean, I myself have on many occasions used an external text file or CSV file containing multiple URLs so I don't have to hard-code them, but you're clearly writing out, or trying to write to, a file, which you said was a pipeline? Now I'm even more confused! But the point is that I hope you're well aware of the fact that if you are trying to create a file or an export of your extracted items, there are built-in options to export to several pre-built formats, among them CSV and JSON. And as you said in your response to my comment, if you're indeed using a pipeline, items and an item exporter, you can create your own export format as you wish, but if it's only the response URL that you need, why go through all that hassle?
My parting words would be: it would serve you well to go over Scrapy's official docs tutorial again, ad nauseam, stressing the importance of also using settings.py as well as items.py.
# -*- coding: utf-8 -*-
import scrapy
import os
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from quotes.items import QuotesItem

class QcrawlSpider(CrawlSpider):
    name = 'qCrawl'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    rules = (
        Rule(LinkExtractor(allow=r'page/.*'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        rurl = response.url
        item = QuotesItem()
        item['quote'] = response.css('span.text::text').extract()
        item['author'] = response.css('small.author::text').extract()
        item['rUrl'] = rurl
        yield item

        with open(os.path.abspath('') + '_' + "urllisr_" + '.txt', 'a') as a:
            a.write(''.join([rurl, '\n']))
            a.close()
Of course, items.py would be filled out appropriately with the fields you see in the spider, but by including the response URL as an item field I can both write it out with the default Scrapy export methods (CSV etc.) and create my own.
In this case it's a simple text file, but one can get pretty crafty; for example, writing out files with the os module the same way, I have created m3u playlists from video hosting sites, and you can get fancy with a custom CSV item exporter. And even beyond that, using a custom pipeline you can then write out a custom format for your CSVs or whatever it is that you wish.
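For reference, here is a minimal items.py sketch matching the fields used in the spider above (an assumption based on the "from quotes.items import QuotesItem" import, not code from the original answer):
import scrapy

class QuotesItem(scrapy.Item):
    # fields referenced in parse_item above
    quote = scrapy.Field()
    author = scrapy.Field()
    rUrl = scrapy.Field()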

Read Velocity Tokens/Tag from .vm file

I have an application wherein I am trying to create a Velocity template repository, which will help me centralise all my email templates and allow me to create a communication hub. All templates will be called at runtime and populated with data via services.
My problem is that I need to provide users with a list of optional and compulsory params when they define the template inputs for the Velocity template.
Is there a way to read the tokens/tags from the Velocity template file and extract them?
For example, I want a list of tokens such as $name.address.streetName to be available to me from the .vm file.
I do not want to go for regex.
I do not have to cache or reuse them; it's just going to be a one-time read to store the default, compulsory & optional params in the database.
I am following these patterns : http://kickjava.com/src/org/apache/velocity/test/view/TemplateNodeView.java.htm
How to use String as Velocity Template?
Please advise.
I got it working like this
RuntimeServices runtimeServices = RuntimeSingleton.getRuntimeServices();
StringReader reader = new StringReader(velocityTemplateBody);
SimpleNode node = runtimeServices.parse(reader, "dummyOne.vm");
for (int i = 0; i < node.jjtGetNumChildren(); i++) {
    if (node.jjtGetChild(i) instanceof org.apache.velocity.runtime.parser.node.ASTReference) {
        System.out.println("Node -----------------" + i + "---" + node.jjtGetChild(i).literal());
    }
}
Using the SimpleNode class you get all the nodes of the .vm file.
The nodes are parsed by JavaCC into ASTReference and ASTText (both extend SimpleNode). To get the tokens you need the ASTReference nodes; to get the HTML text, use the ASTText nodes.

Django Compressor with dynamic LESS file raises a FilterError(err)

I had to come up with quite a complicated setup to enable database-backed styling options for users. Users enter styles (like background color, font face, etc.) in the Django admin backend.
I am creating a dynamic LESS file by rendering a template view as a plain-text view, like so:
views.py:
class PlainTextView(TemplateView):
    """
    Write customized settings into a special less file to overwrite the standard styling
    """
    template_name = 'custom_stylesheet.txt'

    def get_context_data(self, **kwargs):
        context = super(PlainTextView, self).get_context_data(**kwargs)
        try:
            # get the newest PlatformCustomizations dataset which is also activated
            platform_customizations = PlatformCustomizations.objects.filter(apply_customizations=True).order_by('-updated_at')[0]
        except IndexError:
            platform_customizations = ''
        context.update({
            'platform_customizations': platform_customizations,
        })
        return context

    def render_to_response(self, context):
        return super(PlainTextView, self).render_to_response(context, content_type='plain/text')
The template custom_stylesheet.txt looks kind of like this. It takes the database styling entries the users entered in the admin backend:
@CIBaseColor: {{ dynamic_styles.ci_base_color }};
@CIBaseFont: {{ dynamic_styles.ci_base_font }};
...etc...
Now I include this dynamic LESS file in my main.less file together with the other normal static LESS files, like so:
main.less:
#import "bootstrap_variables.less";
//this is the dynamicly created custom stylesheet out of the dynamic_styles app
#import url(http://127.0.0.1:8000/dynamic_styles/custom_stylesheet.less);
//Other styles
#import "my_styles.less";
This setup works fine. The dynamic variables out of my database get rendered into the template and LESS compiles all my less files together.
I have a problem when pushing the code to my production setup where I compile the LESS server side and compress it with django-compressor.
I get the following error:
FilterError: FileError: 'http://127.0.0.1:8000/dynamic_styles/custom_stylesheet.less' wasn't found.
    in /home/application/***/media/static-collected/styles/less/main.less:13:0
12 //this is the dynamicly created custom stylesheet out of the dynamic_styles app
13 @import url(http://127.0.0.1:8000/dynamic_styles/custom_stylesheet.less);
14
Has anybody ever experienced problems with django compressor like that? Does it have problems with dynamically created files like this? Could the absolute url be a problem?
Could you think of another solution to get dynamically generated LESS files working with django-compressor?
I guess django-compressor can't read dynamically created LESS files which are only available "on the fly" when you hit the URL. At least I did not get it working. Also, the file needs to be under COMPRESS_ROOT.
Now I write the LESS file to disk physically every time the model gets saved. Here's the code. It still needs some improvement, like try/except, etc., but it works:
def save(self, *args, **kwargs):
    # todo: add try/except
    less_file = open(os.path.join(settings.PROJECT_ROOT, 'media', 'static', "styles", "less", "custom_stylesheet.less"), "w")
    less_file.write(render_to_string('template/custom_stylesheet.txt', {'platform_customizations': self}))
    less_file.close()
    super(PlatformCustomizations, self).save(*args, **kwargs)