Error: Could not parse rfc1738 URL from string. I think I have tried everything and I cannot solve it.
app.config['SQLALCHEMY_DATABASE_URI'] = 'C:/dev/FlaskBlog/blog.db'
db = SQLAlchemy(app)

class Blogpost(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(50))
    subtitle = db.Column(db.String(50))
    author = db.Column(db.String(20))
    date_posted = db.Column(db.DateTime)
    content = db.Column(db.Text)
The problem appears when I try to post an article.
Here is the addpost route:
@app.route('/addpost', methods=['POST'])
def addpost():
    if request.method == 'POST':
        title = request.form['title']
        subtitle = request.form['subtitle']
        author = request.form['author']
        content = request.form['content']
        post = Blogpost(title=title, subtitle=subtitle, author=author,
                        content=content, date_posted=datetime.now())
        db.session.add(post)
        db.session.commit()
        return redirect(url_for('index'))
    else:
        return render_template('index.html')
There are no test files here, so I cannot test this myself.
But I have encountered the same error, and my database is also SQLite. The following code works fine for me:
from sqlalchemy import create_engine
# relative path on Linux: with three slashes
e = create_engine('sqlite:///relative/path/to/database.db')
# absolute path on Linux: with four slashes
e = create_engine('sqlite:////absolute/path/to/database.db')
# absolute path on Windows
e = create_engine('sqlite:///C:\\absolute\\path\\to\\database.db')
And in your case, I think you can change app.config['SQLALCHEMY_DATABASE_URI'] = 'C:/dev/FlaskBlog/blog.db' to app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///C:\\dev\\FlaskBlog\\blog.db'.
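Putting it together, a minimal sketch of the fixed configuration (only a sketch: the file path is taken from the question, and a raw string is used so the Windows backslashes don't need to be doubled):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# sqlite:/// prefix plus the absolute Windows path; the raw string keeps the backslashes literal
app.config['SQLALCHEMY_DATABASE_URI'] = r'sqlite:///C:\dev\FlaskBlog\blog.db'
db = SQLAlchemy(app)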
For details, see the SQLAlchemy 1.4 documentation.
The SQLALCHEMY_DATABASE_URI must be a valid database URI; fix that and it should work. I also struggled with the same problem.
I'm trying to make a test for this view:
def author_detail(request, pk):
    author = get_object_or_404(Author, pk=pk)
    blog = author.blog_set.all()
    paginator = Paginator(blog, 1)
    page_number = request.GET.get('page')
    page_obj = paginator.get_page(page_number)
    context = {
        'author': author,
        'page_obj': page_obj,
    }
    return render(request, 'blog/author_detail.html', context=context)
The view is working normally. My problem appears when I try to test this view. Here is my test:
class AuthorDetailViewTest(TestCase):
    def setUp(self):
        user = User.objects.create(username='user01', password='123456')
        self.author_instance = Author.objects.create(
            user=user, date_of_birth='1998-09-08', bio='I am user01')
        topic = Topic.objects.create(name='Testing')
        Blog.objects.create(title='My blog', content="It's my blog")
        Blog.author = self.author_instance
        Blog.topic = topic
        # author.blog_set.all() is returning an empty QuerySet.
        # This problem only happens in the tests, not in the view.

    def test_pagination_first_page(self):
        response = self.client.get(
            reverse('author-detail', kwargs={'pk': self.author_instance.pk}))
        self.assertEqual(len(response.context['page_obj']), 1)
The result is:
FAIL: test_pagination_first_page (blog.tests.test_views.AuthorDetailViewTest)
-------------------------------------------------------------------
Traceback (most recent call last):
File "/home/carlos/problem/venv_01/the_blog/blog/tests/test_views.py", line 189,in test_pagination_first_page
self.assertEqual(len(response.context['page_obj']), 1)
AssertionError: 0 != 1
----------------------------------------------------------------------
The len(response.context['page_obj']) is equal to 0. It should be at least 1, because I created one Blog object. When I print author.blog_set.all(), the returned QuerySet is empty (<QuerySet []>). I think the problem is in the creation of the Blog object, because the author and topic fields are ManyToManyFields.
As I mentioned before, my problem is in the test, not in the view. The view is working normally.
The last 3 lines of the following code snippet have some issues:
def setUp(self):
    user = User.objects.create(username='user01', password='123456')
    self.author_instance = Author.objects.create(
        user=user, date_of_birth='1998-09-08', bio='I am user01')
    topic = Topic.objects.create(name='Testing')
    Blog.objects.create(title='My blog', content="It's my blog")
    Blog.author = self.author_instance
    Blog.topic = topic
The blog object is created, but the return value is never kept or fetched, so there is nothing to attach the relations to.
The Blog class itself is being used to connect the author and topic; the created blog instance should be used instead.
author and topic are ManyToMany fields on Blog, so related objects should be attached via the add method. See "How to add data into ManyToMany field?" for additional context.
Solution:
def setUp(self):
    user = User.objects.create(username='user01', password='123456')
    self.author_instance = Author.objects.create(
        user=user, date_of_birth='1998-09-08', bio='I am user01')
    topic = Topic.objects.create(name='Testing')
    blog = Blog.objects.create(
        title='My blog', content="It's my blog")
    blog.author.add(self.author_instance)
    blog.topic.add(topic)
It worked.
I'm starting to learn Flask and maybe I'm just using the wrong words to search, but here's the problem.
I have this class:
class Video(Resource):
    @marshal_with(resource_fields)
    def get(self, video_id):
        result = VideoModel.query.filter_by(id=video_id).first()
        if not result:
            abort(404, message="Could not find video with that id")
        return result

    @marshal_with(resource_fields)
    def put(self, video_id):
        args = video_put_args.parse_args()
        result = VideoModel.query.filter_by(id=video_id).first()
        if result:
            abort(409, message="Video id taken...")
        video = VideoModel(id=video_id, name=args['name'], views=args['views'], likes=args['likes'])
        db.session.add(video)
        db.session.commit()
        return video, 201

    @marshal_with(resource_fields)
    def patch(self, video_id):
        args = video_update_args.parse_args()
        result = VideoModel.query.filter_by(id=video_id).first()
        if not result:
            abort(404, message="Video doesn't exist, cannot update")
        if args['name']:
            result.name = args['name']
        if args['views']:
            result.views = args['views']
        if args['likes']:
            result.likes = args['likes']
        db.session.commit()
        return result
And I'm trying to make it so that when I have a URL like http://127.0.0.1/5000/video/put/1/Test/80/2, I can call the function just by typing the URL. Is there any way to do this? Thanks in advance!
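In case it helps, Flask-RESTful passes URL path segments to the resource methods as keyword arguments, so one way to build that kind of URL is to declare the extra values as converters in the route. The sketch below is only an illustration of that idea; the api object, resource name, route string and parameter names are assumptions, not taken from the original code:

from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class VideoByUrl(Resource):
    # Each converter in the route becomes a keyword argument of the handler.
    def put(self, video_id, name, views, likes):
        # persist the video here, e.g. with the VideoModel from the question
        return {'id': video_id, 'name': name, 'views': views, 'likes': likes}, 201

# A URL such as /video/put/1/Test/80/2 maps onto the parameters above.
api.add_resource(VideoByUrl, '/video/put/<int:video_id>/<string:name>/<int:views>/<int:likes>')

Note that typing a URL into a browser's address bar issues a GET request, so to trigger it that way the same logic would also have to be exposed under a get method.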
I'm setting up a new Scrapy spider and have it developed. I am using Windows 10 and it's running.
My problem is extracting text from different elements. These elements are sometimes inside a strong tag or a p tag, sometimes they have a class, sometimes an id, but I need to pull them together into one field so I can extract a single row of text.
Please check out these links to the site:
https://exhibits.otcnet.org/otc2019/Public/eBooth.aspx?IndexInList=404&FromPage=Exhibitors.aspx&ParentBoothID=&ListByBooth=true&BoothID=193193&fromFeatured=1
https://exhibits.otcnet.org/otc2019/Public/eBooth.aspx?IndexInList=0&FromPage=Exhibitors.aspx&ParentBoothID=&ListByBooth=true&BoothID=202434
https://exhibits.otcnet.org/otc2019/Public/eBooth.aspx?IndexInList=1218&FromPage=Exhibitors.aspx&ParentBoothID=&ListByBooth=true&BoothID=193194&fromFeatured=1
Screenshots:
https://prnt.sc/nkl1vc
https://prnt.sc/nkl1zy
https://prnt.sc/nkl247
# -*- coding: utf-8 -*-
import scrapy


class OtcnetSpider(scrapy.Spider):
    name = 'otcnet'
    # allowed_domains = ['otcnet.org']
    start_urls = ['https://exhibits.otcnet.org/otc2019/Public/Exhibitors.aspx?Index=All&ID=26006&sortMenu=107000']

    def parse(self, response):
        links = response.css('a.exhibitorName::attr(href)').extract()
        for link in links:
            ab_link = response.urljoin(link)
            yield scrapy.Request(ab_link, callback=self.parse_p)

    def parse_p(self, response):
        url = response.url
        Company = response.xpath('//h1/text()').extract_first()
        if Company:
            Company = Company.strip()
        Country = response.xpath('//*[@class="BoothContactCountry"]/text()').extract_first()
        State = response.xpath('//*[@class="BoothContactState"]/text()').extract_first()
        if State:
            State = State.strip()
        Address1 = response.xpath('//*[@class="BoothContactAdd1"]/text()').extract_first()
        City = response.xpath('//*[@class="BoothContactCity"]/text()').extract_first()
        if City:
            City = City.strip()
        zip_c = response.xpath('//*[@class="BoothContactZip"]/text()').extract_first()
        Address = str(Address1) + ' ' + str(City) + ' ' + str(State) + ' ' + str(zip_c)
        Website = response.xpath('//*[@id="BoothContactUrl"]/text()').extract_first()
        Booth = response.css('.eBoothControls li:nth-of-type(1)::text').extract_first().replace('Booth: ', '')
        Description = ''
        Products = response.css('.caption b::text').extract()
        Products = ', '.join(Products)
        vid_bulien = response.css('.aa-videos span.hidden-md::text').extract_first()
        if vid_bulien == "Videos":
            vid_bulien = "Yes"
        else:
            vid_bulien = "No"
        Video_present = vid_bulien
        Conference_link = url
        Categories = response.css('.ProductCategoryLi a::text').extract()
        Categories = ', '.join(Categories)
        Address = Address.replace('None', '')
        yield {
            'Company': Company,
            'Country': Country,
            'State': State,
            'Address': Address,
            'Website': Website,
            'Booth': Booth,
            'Description': Description,
            'Products': Products,
            'Video_present': Video_present,
            'Conference_link': Conference_link,
            'Categories': Categories
        }
I expect the output to be a single description row built from the different elements.
According to this post and the excellent @dimitre-novatchev answer, you need to find a node-set intersection:
$ns1 for your page is:
//p[@class="BoothProfile"]/following-sibling::p
$ns2 is:
//p[@class="BoothProfile"]/following-sibling::div[1]/preceding-sibling::p
As a result you need to process these p elements:
//p[@class="BoothProfile"]/following-sibling::p[count(.|//p[@class="BoothProfile"]/following-sibling::div[1]/preceding-sibling::p) = count(//p[@class="BoothProfile"]/following-sibling::div[1]/preceding-sibling::p)]
You can use this Scrapy code:
for p_elem in response.xpath('//p[@class="BoothProfile"]/following-sibling::p[count(.|//p[@class="BoothProfile"]/following-sibling::div[1]/preceding-sibling::p) = count(//p[@class="BoothProfile"]/following-sibling::div[1]/preceding-sibling::p)]'):
    # using string() to stringify the <p> element
    Description += p_elem.xpath('string(.)').extract_first()
I want to combine two requests to the Google cloud text-to-speech API in a single mp3 output. The reason I need to combine two requests is that the output should contain two different languages.
The code below works fine for many language pair combinations, but unfortunately not for all. If I request e.g. a sentence in English and one in German and combine them, everything works. If I request one in English and one in Japanese, I can't combine the two files into a single output: the output only contains the first sentence and, instead of the second sentence, it outputs silence.
I have now tried multiple ways to combine the two outputs, but the result stays the same. The code below should show the issue.
Please run the code first with:
python synthesize_bug.py --t1 'Hallo' --code1 de-De --t2 'August' --code2 de-De
This works perfectly.
python synthesize_bug.py --t1 'Hallo' --code1 de-De --t2 'こんにちは' --code2 ja-JP
This doesn't work. The single files are ok, but the combined files contain silence instead of the Japanese part.
Also, if it is used with two Japanese sentences, everything works.
I already filed a bug report at Google with no response yet, but maybe it's just me who is doing something wrong here with encoding assumptions. Hope someone has an idea.
#!/usr/bin/env python

import argparse


# [START tts_synthesize_text_file]
def synthesize_text_file(text1, text2, code1, code2):
    """Synthesizes speech from the input file of text."""
    from apiclient.discovery import build
    import base64

    service = build('texttospeech', 'v1beta1')
    collection = service.text()

    # First request: a two-second pause.
    data1 = {}
    data1['input'] = {}
    data1['input']['ssml'] = '<speak><break time="2s"/></speak>'
    data1['voice'] = {}
    data1['voice']['ssmlGender'] = 'FEMALE'
    data1['voice']['languageCode'] = code1
    data1['audioConfig'] = {}
    data1['audioConfig']['speakingRate'] = 0.8
    data1['audioConfig']['audioEncoding'] = 'MP3'

    request = collection.synthesize(body=data1)
    response = request.execute()
    audio_pause = base64.b64decode(response['audioContent'].decode('UTF-8'))
    raw_pause = response['audioContent']

    # Second request: the first text in the first language.
    ssmlLine = '<speak>' + text1 + '</speak>'
    data1 = {}
    data1['input'] = {}
    data1['input']['ssml'] = ssmlLine
    data1['voice'] = {}
    data1['voice']['ssmlGender'] = 'FEMALE'
    data1['voice']['languageCode'] = code1
    data1['audioConfig'] = {}
    data1['audioConfig']['speakingRate'] = 0.8
    data1['audioConfig']['audioEncoding'] = 'MP3'

    request = collection.synthesize(body=data1)
    response = request.execute()

    # The response's audio_content is binary.
    with open('output1.mp3', 'wb') as out:
        out.write(base64.b64decode(response['audioContent'].decode('UTF-8')))
        print('Audio content written to file "output1.mp3"')

    audio_text1 = base64.b64decode(response['audioContent'].decode('UTF-8'))
    raw_text1 = response['audioContent']

    # Third request: the second text in the second language.
    ssmlLine = '<speak>' + text2 + '</speak>'
    data2 = {}
    data2['input'] = {}
    data2['input']['ssml'] = ssmlLine
    data2['voice'] = {}
    data2['voice']['ssmlGender'] = 'MALE'
    data2['voice']['languageCode'] = code2  # 'ko-KR'
    data2['audioConfig'] = {}
    data2['audioConfig']['speakingRate'] = 0.8
    data2['audioConfig']['audioEncoding'] = 'MP3'

    request = collection.synthesize(body=data2)
    response = request.execute()

    # The response's audio_content is binary.
    with open('output2.mp3', 'wb') as out:
        out.write(base64.b64decode(response['audioContent'].decode('UTF-8')))
        print('Audio content written to file "output2.mp3"')

    audio_text2 = base64.b64decode(response['audioContent'].decode('UTF-8'))
    raw_text2 = response['audioContent']

    # Variant 1: concatenate the decoded MP3 bytes.
    result = audio_text1 + audio_pause + audio_text2
    with open('result.mp3', 'wb') as out:
        out.write(result)
        print('Audio content written to file "result.mp3"')

    # Variant 2: concatenate the base64 strings, then decode once.
    raw_result = raw_text1 + raw_pause + raw_text2
    with open('raw_result.mp3', 'wb') as out:
        out.write(base64.b64decode(raw_result.decode('UTF-8')))
        print('Audio content written to file "raw_result.mp3"')
# [END tts_synthesize_text_file]


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('--t1')
    parser.add_argument('--code1')
    parser.add_argument('--t2')
    parser.add_argument('--code2')
    args = parser.parse_args()

    synthesize_text_file(args.t1, args.t2, args.code1, args.code2)
You can find the answer here:
https://issuetracker.google.com/issues/120687867
Short answer: it's not clear why it is not working, but Google suggests a workaround: first write the files as .wav, combine them, and then re-encode the result to mp3.
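For reference, here is a minimal sketch of that workaround (my own assumption about how to implement it, not Google's exact code): request the two parts with audioEncoding set to LINEAR16 so they come back as WAV, write them to output1.wav and output2.wav, then join and re-encode them with pydub (which needs ffmpeg installed). The file names are just examples.

from pydub import AudioSegment

# Hypothetical file names; the parts were synthesized as LINEAR16 (WAV) beforehand.
part1 = AudioSegment.from_wav('output1.wav')   # first language
pause = AudioSegment.silent(duration=2000)     # 2 s of silence instead of the SSML break
part2 = AudioSegment.from_wav('output2.wav')   # second language

combined = part1 + pause + part2
combined.export('result.mp3', format='mp3')    # re-encode the joined audio as MP3
print('Audio content written to file "result.mp3"')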
I have managed to do this in NodeJS with just one function (I don't know how optimal it is, but at least it works). Maybe you can take inspiration from it.
I used the memory-streams dependency from npm:
var streams = require('memory-streams');

function mergeAudios(audios) {
    var reader = new streams.ReadableStream();
    var writer = new streams.WritableStream();
    audios.forEach(element => {
        if (element instanceof streams.ReadableStream) {
            element.pipe(writer)
        } else {
            writer.write(element)
        }
    });
    reader.append(writer.toBuffer())
    return reader
}
The input parameter is a list containing ReadableStreams or response.audioContent values from the synthesizeSpeech operation. If an element is a ReadableStream it uses the pipe operation; if it is audio content it uses the write method. At the end all content is passed into a ReadableStream.
I am trying to create a comments application that I can use everywhere I need it, so I guess I have to use ContentType to attach comments to different models of my project.
Here is my model:
class Comment(models.Model):
    user = models.ForeignKey(User, blank=True, null=True)
    text = models.TextField(u'Comment text')
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')
my view:
def add_comment(request):
    if request.method == 'POST':
        form = CommentForm(request.POST)
        if form.is_valid():
            new_comment = Comment()
            new_comment.text = request.POST['text']
            new_comment.content_type = ???
            new_comment.object_id = request.POST['object_id']
            new_comment.user = request.user
            new_comment.save()
            return HttpResponseRedirect(request.META['HTTP_REFERER'])
    else: ...
How can I get the content type of the current model I am working with?
I have an app NEWS with a model Post in it, and I want to comment on my Posts.
I know I can use ContentType.objects.get(app_label="news", model="post"), but that hard-codes an exact value, so my comment app would not be multipurpose.
P.S. sorry for bad English.
Check django.contrib.comments.forms.CommentForm.get_comment_create_data: It returns a mapping to be used to create an unsaved comment instance:
return dict(
    content_type=ContentType.objects.get_for_model(self.target_object),
    object_pk=force_unicode(self.target_object._get_pk_val()),
    user_name=self.cleaned_data["name"],
    user_email=self.cleaned_data["email"],
    user_url=self.cleaned_data["url"],
    comment=self.cleaned_data["comment"],
    submit_date=datetime.datetime.now(),
    site_id=settings.SITE_ID,
    is_public=True,
    is_removed=False,
)
So I guess the line you are looking for is:
content_type = ContentType.objects.get_for_model(self.target_object),
Remember, self is the form instance, and self.target_object is the instance that the current comment is attached to.
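Applied to the question's view, a minimal sketch could look like this (the helper name and the Post lookup in the usage comment are only illustrative; any model instance you pass in works the same way, which is what keeps the comment app generic):

from django.contrib.contenttypes.models import ContentType

def add_comment_for(request, target):
    # `target` is whatever model instance the comment is attached to (a Post, etc.)
    new_comment = Comment(
        user=request.user,
        text=request.POST['text'],
        content_type=ContentType.objects.get_for_model(target),
        object_id=target.pk,
    )
    new_comment.save()
    return new_comment

# e.g. in the news app:
# add_comment_for(request, Post.objects.get(pk=request.POST['object_id']))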