How to retrieve published posts from a WordPress database? - sql

I'm trying to build a custom script and I need to retrieve data from a WordPress database. I have to get the content of the posts, but when I browse the wp_posts table in phpMyAdmin I see lots of rows with mostly the same content: saved drafts, auto-saved revisions and so on.
So what I want is a SQL command that will give me the content of the published posts and ignore all the drafts and saved revisions. It's basically the query WordPress runs when displaying a post (it displays the final, published one).
Hopefully I was clear enough.

select * from wp_posts
where post_status = 'publish'  -- skip drafts, pending and auto-draft rows
and post_type = 'post'         -- skip revisions (autosaves), pages and attachments
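If the custom script lives outside WordPress, here is a minimal sketch of running that query from Python with the pymysql library. The connection details and the default wp_ table prefix are assumptions about your setup, so adjust them to match your installation.

import pymysql

# Connection details and the default "wp_" table prefix are assumptions;
# adjust them to match your installation.
connection = pymysql.connect(
    host="localhost",
    user="wp_user",
    password="secret",
    database="wordpress",
    cursorclass=pymysql.cursors.DictCursor,
)
try:
    with connection.cursor() as cursor:
        # Same filter as the query above: published, regular posts only.
        cursor.execute(
            "SELECT ID, post_title, post_content "
            "FROM wp_posts "
            "WHERE post_status = 'publish' AND post_type = 'post'"
        )
        for row in cursor.fetchall():
            print(row["ID"], row["post_title"])
finally:
    connection.close()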

Related

Roblox: using the Asset ID in the API to get the image thumbnail

I am working on a website with the Roblox API and was using the Asset ID to pull thumbnails of different assets in the catalog. The code below was working perfectly until a few days ago.
<img src="https://www.roblox.com/asset-thumbnail/image?assetId=<?php echo get_field('roblox-item-id', $p->ID ) ?>&width=75&height=75" />
Please note 'roblox-item-id' = Asset ID in my code above
The asset id is still correct because everything works on our website besides the thumbnails.
I saw a few posts saying that a while ago the URL for these images was changed to rbxthumb://type=Asset&id=
That method also did not work, and people who posted links to the solution in the past now report the same error I am getting.
It seems like there is some kind of undocumented change to access these thumbnails with the asset ID. Does anyone know of a way for me to access them or an alternative method I could use besides the asset id? This happened only days ago and the Roblox Dev forums don't seem to have anyone posting about it.
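Not something confirmed in this thread, but one alternative worth testing is the batch endpoint on thumbnails.roblox.com. The endpoint, parameters and response shape in the sketch below are assumptions based on Roblox's public web API, so treat it as a starting point rather than a verified fix.

import requests

# Sketch only: look up an asset thumbnail URL via the batch thumbnails API.
# Endpoint, parameters and response shape are assumptions, not confirmed here.
def asset_thumbnail_url(asset_id, size="75x75"):
    response = requests.get(
        "https://thumbnails.roblox.com/v1/assets",
        params={"assetIds": asset_id, "size": size, "format": "Png"},
        timeout=10,
    )
    response.raise_for_status()
    data = response.json().get("data", [])
    # Each entry has a "state"; only completed thumbnails carry an imageUrl.
    if data and data[0].get("state") == "Completed":
        return data[0]["imageUrl"]
    return None

print(asset_thumbnail_url(123456789))  # hypothetical asset id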

How to get data concurrency when put in VueJS

I am making a form for candidates to submit recruitment information, as shown below.... The basic information is stored in the candidate information table, and the CV file is saved in the archive of the Strapi CMS. The problem I am facing is that I want to get the file link after I push the file to the repository, so that I can put it in the candidate dashboard.
The general pattern is that your backend should return an id or url after you post your entity. You would then use this returned id to get, update, or delete your entity.
An asynchronous promise is typically used for the initial post; when it resolves with the id (or url), you update your client dashboard.
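Here is a minimal sketch of that flow, written in Python with the requests library purely to show the sequence; in the Vue client the same two steps would be done with axios or fetch. The Strapi upload route, the candidates endpoint and the field names are assumptions about your project.

import requests

API = "http://localhost:1337"  # assumed Strapi instance

# 1) Upload the CV file; the upload endpoint returns the stored file's metadata.
with open("cv.pdf", "rb") as cv:
    upload = requests.post(f"{API}/api/upload", files={"files": cv})
upload.raise_for_status()
file_url = upload.json()[0]["url"]  # link to the archived file (may be relative to the Strapi host)

# 2) Save the candidate record with that link so the dashboard can display it.
candidate = {"data": {"name": "Jane Doe", "cv_url": file_url}}
requests.post(f"{API}/api/candidates", json=candidate).raise_for_status()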

How to delete all unpublished posts in Sitefinity?

I have hundreds of blog posts in unpublished status. What is the best way to delete all unpublished blog posts in Sitefinity? I am on version 9.1.6100.0.
You can create a simple widget that deletes all unpublished blog posts using the Sitefinity API as shown here:
http://docs.sitefinity.com/for-developers-delete-blog-posts#deleting-all-blog-posts
Just make sure you add a Where clause for the blog post status being unpublished.
Then add the widget to a temp page and execute the delete logic.
Then remove the temp page.

Can someone post a "multiple table" example for yadcf?

I'm trying to get the "multiple tables" example from https://github.com/vedmack/yadcf working and I can't seem to get it to work.
I was wondering if anyone could post a zip file of a working example that I could just tweak.
I have a specific outcome I'm trying to test for with multiple tables, where the second table gets filtered by the contents of the first table.
example: http://yadcf-showcase.appspot.com/dom_multi_columns_tables_1.10.html
You can grab all the needed files from the yadcf-showcase repo; there is a link to the zip of the showcase, the relevant HTML, and the same example in action in the showcase.
You can grab the war folder, place it into the Public folder of your Dropbox and access it via "Copy public link"; that way there will be no need for a web server.

Use Scrapy to cut down on Piracy

I am new to using Scrapy and I know very little of the Python language. So far, I have installed Scrapy and gone through a few tutorials. After that, I have been trying to find a way to search many sites for the same data. My goal is to use Scrapy to find links to "posts" and links for a few search criteria. As an example, I would like to search sites A, B, and C. For each site, I would like to see if they have a "post" about app name X, Y, or Z. If they have any "posts" on X, Y, or Z, I would like it to grab the link to that post. If it would be easier, it could instead scan each post for our company name: rather than X, Y, Z it would search the contents of each "post" for [Example Company name]. The reason I am doing it this way is so that the JSON that is created just has links to the "posts", so that we can review them and contact the website if need be.
I am on Ubuntu 10.12 and I have been able to scrape the sites we want, but I have not been able to narrow down the JSON to the needed info. So currently we are still having to go through hundreds of links, which is what we wanted to avoid by doing this. The reason we are getting so many links is that all the tutorials I have found are for scraping a specific HTML tag. I want it to search the tag to see if it contains any part of our app titles or package name.
With the code below, it displays the post info, but we still have to pick out the links from the JSON. It saves time, but it is still not really what we want. Part of that, I think, is that I am not referencing or calling it correctly. Please give me any help that you can; I have spent hours trying to figure this out.
posts = hxs.select("//div[@class='post']")
items = []
for post in posts:
    item = ScrapySampleItem()
    item["title"] = post.select("div[@class='bodytext']/h2/a/text()").extract()
    item["link"] = post.select("div[@class='bodytext']/h2/a/@href").extract()
    item["content"] = post.select("div[@class='bodytext']/p/text()").extract()
    items.append(item)
for item in items:
    yield item
I want to use this to cut down on piracy of our Android apps. If I can have this go out and search the piracy sites that we want, I can then email the site or hosting company with all of the links that we want removed. Under copyright law they have to comply, but they require that we link them to every "post" they infringe upon, which is why app developers normally do not mess with this kind of thing. These sites host hundreds of apps, so finding the links to your apps takes many hours of work.
Thank you for any help you can offer in advance. You will be helping out many App Developers in the long run!
Grady
Your XPath selectors are absolute. They have to be relative to the previous selector (the .)
posts = hxs.select("//div[@class='post']")
for post in posts:
    item = ScrapySampleItem()
    item['title'] = post.select('.//div[@class="bodytext"]/h2/a/text()').extract()
    item['link'] = post.select('.//div[@class="bodytext"]/h2/a/@href').extract()
    item['content'] = post.select('.//div[@class="bodytext"]/p/text()').extract()
    yield item
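If the goal is to keep only posts that mention your app titles or company name, one way is to filter the extracted fields before yielding. Below is a minimal sketch along those lines; APP_NAMES is a hypothetical list, and the item fields match the snippet above.

# Sketch: yield only posts whose title or body mentions one of our app names.
# APP_NAMES is a hypothetical list; adjust it to your titles and package names.
APP_NAMES = ["App X", "App Y", "Example Company"]

def contains_app_name(values):
    # extract() returns a list of strings, so join them before checking.
    text = " ".join(values).lower()
    return any(name.lower() in text for name in APP_NAMES)

posts = hxs.select("//div[@class='post']")
for post in posts:
    item = ScrapySampleItem()
    item['title'] = post.select('.//div[@class="bodytext"]/h2/a/text()').extract()
    item['link'] = post.select('.//div[@class="bodytext"]/h2/a/@href').extract()
    item['content'] = post.select('.//div[@class="bodytext"]/p/text()').extract()
    if contains_app_name(item['title']) or contains_app_name(item['content']):
        yield item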