Dynamic content - where do I start?

I have a rather simple question. I want to dynamically change the content of a page element based on the selection of a drop-down menu. The drop-down menu is populated by a PHP loop that gets data from a MySQL database. So far, I'm only familiar with PHP and HTML/CSS.
I don't really know where to start or, more specifically, what technologies I should be researching. So far, I've heard of AJAX, jQuery, JavaScript, NOLOH, HTML iframes, pure CSS, etc. I really just want to know where to look!

Start off with the basics: HTML (if you're feeling adventurous, you could try HTML5) and CSS.
After you're comfortable with that, you can start getting a little more complex and pick up JavaScript. This is how the majority of the 'dynamic' bits of web pages are accomplished, so learn it well.
Once you've got the JavaScript basics down, you can learn how to use JavaScript to manipulate the DOM (Document Object Model) and make AJAX requests back to the server to get new content to insert.
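For example, here is a minimal sketch of that last step (the endpoint get_content.php, the element ids, and the option values are all hypothetical; your PHP loop would emit the real options):

<select id="topic-select">
    <option value="1">First topic</option>
    <option value="2">Second topic</option>
</select>
<div id="content"></div>

<script>
// When the selection changes, fetch a matching HTML fragment
// from the server and insert it into the page
document.getElementById('topic-select').addEventListener('change', function () {
    fetch('get_content.php?id=' + encodeURIComponent(this.value))
        .then(function (response) { return response.text(); })
        .then(function (html) {
            document.getElementById('content').innerHTML = html;
        });
});
</script>

On the server side, get_content.php would query MySQL for the selected id and echo the matching HTML fragment.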


Arbitrary values for Bootstrap?

I am currently developing a web application using Bootstrap-vue on the frontend. Does Bootstrap have a feature that lets me create classes on the fly? Tailwind has one. I tried searching the internet but had no luck.
Here is my case:
Color values are saved in the database.
Every time the page loads, I will fetch those colors and create classes based on them.
Your help is much appreciated. Thanks in advance.
Bootstrap is not really a utility-based CSS framework, so there is nothing similar to Tailwind's arbitrary values; that is not the mindset of the tool.
Also, even though this kind of feature exists in Tailwind, it can become unwieldy pretty quickly, and the best approach is still to write some plain vanilla CSS alongside your template to get what you want.
You won't get any performance benefit from an arbitrary value anyway, so it should be reserved for exceptional cases; a regular CSS declaration will be far cleaner.
Feel free to create global CSS variables in vanilla CSS for your use case.
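For example, a minimal sketch of that idea (the /api/colors endpoint and the response shape are hypothetical):

// Hypothetical endpoint; assumed response shape:
// [{ name: 'primary', value: '#3273dc' }, ...]
async function applyThemeColors() {
    const colors = await fetch('/api/colors').then(res => res.json());
    // Expose each fetched color as a global CSS custom property
    for (const { name, value } of colors) {
        document.documentElement.style.setProperty('--color-' + name, value);
    }
}
applyThemeColors();

Your stylesheet can then reference var(--color-primary) and the like, with no need to generate classes on the fly.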

Objective-C: get an HTML page's links

I'm quite new to Objective-C programming and I'm trying to make an application that returns all the link addresses in an HTML page. In this case I shouldn't just parse the HTML, but instead capture the links by intercepting them from the page's network requests.
Is it possible to intercept the application's network requests or something?
Thanks
Coincidentally, Ray Wenderlich's rather AWESOME iOS tutorial site posted this article in the last hour. As you are new to iOS/ObjC, I highly recommend reading it thoroughly.
Let’s say you want to find some information inside a web page and display it in a custom way in your app. This technique is called “scraping.” Let’s also assume you’ve thought through alternatives to scraping web pages from inside your app, and are pretty sure that’s what you want to do.
Well then you get to the question – how can you programmatically dig through the HTML and find the part you’re looking for, in the most robust way possible? Believe it or not, regular expressions won’t cut it!
And before you think Regular Expressions might really be an answer, please read this.

Responsive web design

I have three CSS files with me:
skeleton.css
base.css
layout.css
What I want to do is make my web site responsive, and these CSS files are going to be used for that.
I have gone through all three CSS files; they contain media queries and more.
What I want to know is: how do I use or embed my existing style.css with media queries? How do I apply media queries, and where do I apply them?
Skeleton is a responsive CSS framework that works really well. Your best bet is to review the code on Dave's website at http://www.getskeleton.com/ - the code he has posted is very helpful and will give you a great start. I started with Skeleton (http://72t.net) and later moved to Bootstrap.
With all that said, depending on how the code was originally written, it may be a real task trying to convert an existing website to a responsive design. I have now done (or am doing) four responsive sites, and in each case I found it easier to start from scratch: the original sites were done in ASP.NET with its attendant bloat, while the new sites are HTML5, CSS, jQuery and Ajax.
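To answer the "how and where" part directly: a media query is just a block inside a stylesheet (your existing style.css or a separate file) whose rules apply only while its condition matches. A minimal sketch, with an illustrative breakpoint and class name:

/* Default (desktop) rules */
.container { width: 960px; margin: 0 auto; }

/* Applied only when the viewport is 767px wide or narrower */
@media only screen and (max-width: 767px) {
    .container { width: 100%; }
}

You can also attach a whole stylesheet conditionally: <link rel="stylesheet" media="(max-width: 767px)" href="mobile.css">.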

Dynamic web page convertible to PDF

I'm thinking about writing a professional CV page that would be easy to update, using a simple backend to add information and blocks of optional details, and... (feature creep coming)
Anyway, I was thinking of a graphically "simple" web page that would easily be convertible to a PDF file, whether using browser functionality or not.
Assuming that the page has blocks of text that you must click a button to see (those are the optional details), what should I know, or what tools should I use, to write this web page?
I'm totally rusty on web code; I used PHP without Ajax a lot before, but I understand the idea. I was thinking maybe it would be a good opportunity to try a framework and make a "webapp", like Ruby + Rails or Python + Django? Is that a good idea? I'm ready to learn about those, I'm just not sure it's worth it for such a project.
Are there things about HTML or JavaScript behaviour I should know, or features I shouldn't use because they would break PDF generation tools, or anything like that?
Any advice on the way to proceed would be helpful.
You'll want to read up on how to create a print stylesheet. This way when you go to print the CV you can choose something like CutePDF Writer and your print stylesheet will automatically be used. You will make your stylesheet show all hidden text blocks and hide things like navigation, buttons, etc.
I can't tell you whether or not it's worth it for you to try a new framework for this project; that's up to you. It's not bad to learn new things, but since I don't know all the details of your project it's hard to say whether it's worth it here. From your description it sounds like you're just making an HTML resume/CV, which sounds, to me, like one flat HTML page with some JavaScript. If that's the case you could probably just use a text editor.
If you want my personal opinion, ASP.NET 4 is the way to go if you want to learn something new (or if you just want to use a great framework).
As far as breaking the PDF generation goes, your print stylesheet will be responsible for showing/hiding things, but any JavaScript should be aware of this as well. Check the link I gave you above for more information.
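A minimal sketch of such a print stylesheet (the class names are illustrative):

/* print.css - used only when printing or saving to PDF */
@media print {
    nav, button, .no-print { display: none; }  /* hide navigation and buttons */
    .optional-details { display: block; }      /* reveal the collapsed blocks */
}

Attach it with <link rel="stylesheet" media="print" href="print.css">, or put the @media print block at the end of your main stylesheet.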

How can I programmatically obtain content from a website on a regular basis?

Let me preface this by saying I don't care what language this solution gets written in, as long as it runs on Windows.
My problem is this: there is a site with frequently updated data that I would like to fetch at regular intervals for later reporting. The site requires JavaScript to work properly, so just using wget doesn't work. What is a good way to either embed a browser in a program or use a stand-alone browser to routinely scrape the screen for this data?
Ideally, I'd like to grab certain tables on the page but can resort to regular expressions if necessary.
You could probably use web app testing tools like Watir, Watin, or Selenium to automate the browser to get the values from the page. I've done this for scraping data before, and it works quite well.
If JavaScript is a must, you can try instantiating Internet Explorer via ActiveX (CreateObject("InternetExplorer.Application")) and use its Navigate2() method to open your web page.
Set ie = CreateObject("InternetExplorer.Application")
ie.Visible = True
ie.Navigate2 "http://stackoverflow.com"
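' Wait for the page to finish loading (illustrative addition; assumes Windows Script Host)
Do While ie.Busy Or ie.ReadyState <> 4 : WScript.Sleep 100 : Loop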
After the page has finished loading (check document.ReadyState), you have full access to the DOM and can use whatever methods to extract any content you like.
You can look at Beautiful Soup - being open-source Python, it is easily programmable. Quoting the site:
Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:
Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.
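A minimal usage sketch (Python; the URL and the table-row targets are illustrative):

# Fetch a page and pull the text out of each table row
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://example.com/data").read()
soup = BeautifulSoup(html, "html.parser")
for row in soup.find_all("tr"):
    print([cell.get_text() for cell in row.find_all("td")])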
I would recommend Yahoo Pipes; that's exactly what it was built to do. Then you can get the Yahoo Pipes data as an RSS feed and do what you want with it.
If you are familiar with Java (or perhaps another language that runs on a JVM, such as JRuby, Jython, etc.), you can use HTMLUnit. HTMLUnit simulates a complete browser: HTTP requests, creating a DOM for each page, and running JavaScript (using Mozilla's Rhino).
Additionally, you can run XPath queries on documents loaded in the simulated browser, simulate events, etc.
http://htmlunit.sourceforge.net
Give Badboy a try. It's meant to automate the system testing of your websites, but you may find its regular expression rules handy enough to do what you want.
If you have Excel then you should be able to import the data from the webpage into Excel.
From the Data menu select Import External Data and then New Web Query.
Once the data is in Excel then you can either manipulate it within Excel or output it in a format (e.g. CSV) you can use elsewhere.
To complement Whaledawg's suggestion, I was going to suggest using an RSS scraper application (do a Google search); then you can consume nice raw XML programmatically instead of a response stream. There may even be a few open-source implementations that would give you more of an idea if you wanted to implement one yourself.
You could use the Perl module LWP together with the JavaScript module. While this may not be the quickest to set up, it should work reliably. I would definitely not make this your first foray into Perl, though.
I recently did some research on this topic. The best resource I found is this Wikipedia article, which gives links to many screen scraping engines.
I needed something that I could use as a server and run in batch; from my initial investigation, I think Web Harvest is quite good as an open-source solution, and I have also been impressed by Screen Scraper, which seems very feature-rich and can be used with different languages.
There is also a newer project called Scrapy; I haven't checked it out yet, but it's a Python framework.