I need a low-level OLE control for reading, creating, and modifying PDF files. I will use it with Visual FoxPro. By low level I mean that I won't need to install any extra software on the client machine.
Long story short, I need something like iText.
Preferably free or cheap.
Thanks.
QuickPDF includes a pure Win32 DLL-based version and is reasonably cheap at $249 with no runtime license fees. You would just need to copy the DLL into the same directory as your application.
www.quickpdf.com
I don't know what you mean about "open"ing DLLs; however, VFP CAN make calls to DLLs via DECLARE statements to expose them... it's done quite regularly with many Windows API calls.
However, your specific need of dealing with PDFs to modify poses another question... What is it you plan on trying to "modify"?
My ultimate goal is to use VB.NET to interface with a webcam and do image processing.
Currently I'm just using Matlab, but it is insanely slow. Since I'm going into the area of image processing, coupled with object recognition, which path should I go down? By that I mean: is it GDI+, DirectX, or some other API? What is the API that supports manipulating and analyzing graphical input data? With that I could delve deeper and create standalone software just for my own interest/project.
Before going deep into digital image processing with VB.NET, I strongly suggest that you take your time to learn the basics first, and after that move on to the next step, which is dealing with the APIs you mentioned.
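To give a rough feel for the GDI+ route (exposed in .NET through System.Drawing), here is a minimal, hedged sketch that converts an image to grayscale. The file names are placeholders, and GetPixel/SetPixel is the slow-but-simple approach (LockBits is the faster one):

    ' Minimal sketch; "input.jpg" and "output.png" are placeholder paths.
    Imports System.Drawing
    Imports System.Drawing.Imaging

    Module GrayscaleSketch
        Sub Main()
            Using bmp As New Bitmap("input.jpg")
                For y As Integer = 0 To bmp.Height - 1
                    For x As Integer = 0 To bmp.Width - 1
                        Dim c As Color = bmp.GetPixel(x, y)
                        ' Simple luminance approximation
                        Dim gray As Integer = CInt(0.299 * c.R + 0.587 * c.G + 0.114 * c.B)
                        bmp.SetPixel(x, y, Color.FromArgb(gray, gray, gray))
                    Next
                Next
                bmp.Save("output.png", ImageFormat.Png)
            End Using
        End Sub
    End Module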
However, to answer your question: an API (Application Programming Interface) is a set of programming instructions and standards that lets your application communicate with other applications.
It basically allows two different pieces of software to speak to one another through a common interface.
As for the DLL (Dynamic link library) files, they are a set of executable functions or data that can be used by a Windows application.
Or as I quote from Wikipedia:
Dynamic-link library (also written unhyphenated), or DLL, is Microsoft's implementation of the shared library concept in the Microsoft Windows and OS/2 operating systems. These libraries usually have the file extension DLL, OCX (for libraries containing ActiveX controls), or DRV (for legacy system drivers). The file formats for DLLs are the same as for Windows EXE files — that is, Portable Executable (PE) for 32-bit and 64-bit Windows, and New Executable (NE) for 16-bit Windows. As with EXEs, DLLs can contain code, data, and resources, in any combination.
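To make that concrete, here is a minimal, hedged sketch (VB.NET) of calling a function exported by one of those system DLLs; MessageBeep in user32.dll is just a convenient, harmless export to use as an example:

    ' Minimal sketch: calling a function exported by a system DLL from VB.NET.
    ' MessageBeep lives in user32.dll; &H40 is MB_ICONASTERISK.
    Module DllCallSketch
        Declare Function MessageBeep Lib "user32.dll" (ByVal uType As UInteger) As Boolean
        Sub Main()
            MessageBeep(&H40UI)
        End Sub
    End Module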
Basically, you shouldn't go too deep while you are still at the beginning. I strongly encourage you to start by learning the language itself, then move forward step by step until you master it.
I would like to say one thing to you, IvanWong: welcome to the world of programming, fun and challenges!
My scenario is as follows: a .NET 4.0 solution with several projects (one host and some DLLs), one DLL in particular making use of the MySQL .NET Connector in order to call only stored procedures. I've also signed all my assemblies with a private key.
I'm curious if a hacker could somehow obtain the password to the database user from the connection string (even though that user only has permission to EXECUTE).
Also I'm curious whether a hacker would be able to find out the stored procedures I call on and whether he could Select/Insert/Delete arbitrarily with the use of my stored procedures.
All this with the presumption that the hacker only has a copy of the compiled solution.
If you have hard-coded the user name and password in your code, or even if you store the credentials in an encrypted file but the encryption key is hard-coded in your code, it is easily hackable, because .NET assemblies are not compiled to machine code. When you build a .NET assembly, it converts your VB.NET code into MSIL code, which is just a lower-level programming language that is still easily readable.
Microsoft provides a free tool (as part of the .NET Framework SDK) called the MSIL Disassembler, which allows you to easily view all the MSIL code for any compiled assembly. There are many tools available which allow you to take the MSIL code and "decompile" it to VB or C# code. You'd be amazed at how easy it is to reproduce your original code with a .NET decompiler; if the PDBs are available, the output is shockingly similar to the original code.
So, unless you are using some third-party code obfuscation or assembly encryption tool, your "compiled" assemblies are very easily reverse engineered. Usually this isn't a big deal, but if avoiding hacking is a high concern, you certainly shouldn't be putting any secrets in your code.
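As a consequence, keep secrets out of the assembly entirely. A minimal, hedged sketch of reading the connection string from configuration instead of hard-coding it (the "MyDb" name is a placeholder, and note the config file itself is still readable on disk, so consider encrypting the connectionStrings section as well):

    ' Minimal sketch: reading the connection string from app.config instead of
    ' hard-coding it. "MyDb" is a placeholder name; requires a reference to
    ' System.Configuration.
    Imports System.Configuration

    Module ConnectionStringSketch
        Function GetConnectionString() As String
            Return ConfigurationManager.ConnectionStrings("MyDb").ConnectionString
        End Function
    End Module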
Are you confident that all of the code in the project is safe from things like SQL Injection vulnerabilities and Local File Inclusion vulnerabilities (and will remain so, as it evolves)?
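On the injection point: as long as every call goes through parameterized stored-procedure commands, an attacker who learns the procedure names still cannot inject SQL text. A hedged sketch of what such a call looks like with the MySQL .NET Connector (the procedure, parameter, and column names are placeholders; the parameter marker may vary by connector version):

    ' Minimal sketch: calling a stored procedure through the MySQL .NET Connector
    ' with parameters, so no SQL text is ever built from user input. "get_user",
    ' "@p_id" and "name" are placeholders.
    Imports System.Data
    Imports MySql.Data.MySqlClient

    Module StoredProcSketch
        Sub CallGetUser(ByVal connectionString As String, ByVal userId As Integer)
            Using conn As New MySqlConnection(connectionString)
                Using cmd As New MySqlCommand("get_user", conn)
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.Parameters.AddWithValue("@p_id", userId)
                    conn.Open()
                    Using reader As MySqlDataReader = cmd.ExecuteReader()
                        While reader.Read()
                            Console.WriteLine(reader("name"))
                        End While
                    End Using
                End Using
            End Using
        End Sub
    End Module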
I've got a game in VB6 and it works great and all, but I have been toying with the idea of creating a scripting engine. I'm thinking I'd like VB6 to read in flat text script files for me and then lex/parse/execute them.
I have good programming experience, and I've built a simple C compiler, as well as a LOGO emulator before.
My question is:
Are there any tools that I can use, like Lex/Yacc/Bison, to help me? How should I approach this problem in regards to lexing, parsing, and feeding the commands back to VB6 so I can handle them? Is this idea a BAD IDEA in the sense that there are too many obstacles in the way (for example, building Minesweeper in assembly, though not impossible, is very difficult, and a bad idea)?
Use the Microsoft® Windows® Script Control because it is easy to integrate into existing VB6 applications. The control supports VBScript, JScript, or any other "Active Script" implementation.
I have used the Windows Script Control in four projects and it works extremely well. Very easy to integrate. I wish Microsoft had given us a replacement in .NET and made it as easy to use. (I understand the control is not needed in .NET, but having the ability to simply create an object that handles everything is nice.)
Windows Script Control
The Microsoft® Windows® Script Control is an ActiveX® control that provides developers with an easy way to make their applications scriptable. This, in turn, enables users to extend application functionality through scripts, much as they do with macros today.
INFO: Where to Obtain the Script Control at http://support.microsoft.com/kb/184739. Includes links to other howto support articles.
Chapter 13: Adding Scripting Support to Your Application at http://msdn.microsoft.com/en-us/library/aa227413(VS.60).aspx
Designing a Calculator at http://msdn.microsoft.com/en-us/library/aa227421(VS.60).aspx
How To Use Script Control Modules and Procedures Collections, Inserted from http://support.microsoft.com/kb/184745
How To Use the AddObject Method of the Script Control, Inserted from http://support.microsoft.com/kb/185697
SAMPLE: SCRIPTEX.EXE Uses the ScriptControl with Visual Basic, Inserted from http://support.microsoft.com/kb/189484
Windows Script Control can be downloaded at http://www.microsoft.com/downloads/details.aspx?familyid=d7e31492-2595-49e6-8c02-1426fec693ac&displaylang=en. (Supported Operating Systems: Windows 2000; Windows 98 Second Edition; Windows ME; Windows NT; Windows Server 2003; Windows XP)
MSDN Search of "MSScriptControl.ScriptControlClass" at http://social.msdn.microsoft.com/Search/en-US?query=%22MSScriptControl.ScriptControlClass%22&ac=8
MSDN Search of "Windows Script Control" at http://social.msdn.microsoft.com/Search/en-US?query=%22Windows+Script+Control%22&ac=8
MSDN Search of "MSSCRIPT" at http://social.msdn.microsoft.com/Search/en-US?query=MSSCRIPT&ac=8
Unless you're doing it for your own instruction, you may want to try using Lua: VB6 - Lua Integration
If you're willing to use VBScript rather than VB6 you might be able to just use the MSScriptControl to run the commands rather than creating your own. Here's an article discussing using it from a .Net app, though it's an ActiveX control so should give you quite a bit of flexibility.
The control can be downloaded from here.
I've actually seen some quite reasonable implementations of compilers/interpreters in VB6[1]. It's not the language I would choose (few functional features, insufficient static type system), but with experience you can work around these drawbacks and be quite productive, so why not.
You can use the GOLD parser generator that supports VB6 as a start.
[1]: Somewhere on PSC or in this download repository I think ...
Note that there is the MSScriptControl too.
There also appears to be an additional alternative for VB6:
SadScript is a variant of VB6 most prominently used as a scripting engine in MMORPGs.
See here for more: What is sadscript? Can I use it in vb.net? Why hasn't anyone I have asked heard of it?
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages to either technique?
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
MacOS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, as the DLL or shared library code can be shared between multiple processes. The OS achieves this by loading the DLL or shared library once and mapping it into the virtual address space of every process that uses it.
On Windows, you have to use dynamically loaded libraries because the GDI and USER libraries are available as DLLs only. You can't link either of those in or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps; otherwise your binary would bloat for no good reason, and it increases the probability that your app would be incompatible with the system in the long run (however, in the short run, static linking can somewhat shield you from tiny breaking changes in libraries). And you can't link in proprietary libraries on OSes which rely on them.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages of using either technique?
Any kind of modularization is good since it makes updating the software easier: you do not have to update the whole program binary if a bug is fixed; if the bug appears in some DLL, only that DLL needs to be updated.
The only downside, in my opinion, is that you introduce extra complexity into the development of the program, e.g. if a DLL is a C or C++ DLL, you have to deal with different calling conventions, etc.
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you are calling functions in a DLL that you link against at load time (as if it were statically linked), or whether the DLL is a "free-standing" dynamic library that you only access via LoadLibrary() and GetProcAddress() etc.
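For illustration, here is a minimal, hedged sketch of that second, "free-standing" route from VB.NET; GetTickCount in kernel32.dll is just a convenient export to resolve by name at run time:

    ' Minimal sketch: the LoadLibrary()/GetProcAddress() route, shown from VB.NET.
    Imports System.Runtime.InteropServices

    Module DynamicLoadSketch
        Private Declare Function LoadLibrary Lib "kernel32" (ByVal fileName As String) As IntPtr
        Private Declare Function GetProcAddress Lib "kernel32" (ByVal hModule As IntPtr, ByVal procName As String) As IntPtr
        Private Declare Function FreeLibrary Lib "kernel32" (ByVal hModule As IntPtr) As Boolean

        ' Managed signature matching the unmanaged function we resolve.
        Private Delegate Function GetTickCountDelegate() As UInteger

        Sub Main()
            Dim hLib As IntPtr = LoadLibrary("kernel32.dll")          ' load (or reuse) the DLL
            Dim pFn As IntPtr = GetProcAddress(hLib, "GetTickCount")  ' resolve the export by name
            Dim getTick As GetTickCountDelegate = _
                CType(Marshal.GetDelegateForFunctionPointer(pFn, GetType(GetTickCountDelegate)), GetTickCountDelegate)
            Console.WriteLine("Milliseconds since boot: {0}", getTick())
            FreeLibrary(hLib)
        End Sub
    End Module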
One big advantage of shared libraries (DLLs on Windows or .so on Unix) is that you can rebuild the library and its consumers separately while with static libraries you have to rebuild the library and then relink all the consumers which is very slow on Unix systems and not very fast on Windows.
MacOS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. Mostly this makes sense in big software projects.
Static linking makes sense for small single-component applications, when there is no need for code reuse. It simplifies distribution since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the InfoZIP ZIP libraries, an update to the DLL/.so automatically made all software that used these libraries safe. Software that was linked statically had to be recompiled.
Windows still uses DLLs and Mac programs seem not to use DLLs at all. Are there benefits or disadvantages of using either technique?
Both use shared libraries, they just use a different name.
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you will have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, loading a program meant that the whole program was first loaded into RAM and then the "resolve shared libraries" step happened. Today, only the parts of the program which are actually executed are loaded on demand. So with a static program, you don't need to resolve the DLLs; with DLLs, you don't need to load them all at once. Performance-wise, they should be on par.
Which leaves the "DLL Hell". Many programs on Windows bring all DLLs they need and they write them into the Windows directory. The net effect is that the last installed programs works and everything else might be broken. But there is a simple workaround: Install the DLLs into the same directory as the EXE. Windows will search the current directory first and then the various Windows paths. This way, you'll waste a bit of disk space but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (with the same version) in the Windows directory but then, you're again vulnerable to some bad app which overwrites the version you need with something that breaks your neck. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar things to secure your code. This is a tight spot; crackers are making lots of money from security issues and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for many years, say 20, installing all DLLs in the program directory is for you. If not, then write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly starts to crash.
Yes, see this text:
Dynamic linking has the following advantages:
Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.
Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.
Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.
Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.
Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.
Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.
A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages.
A shared component defines interfaces in your process, so you are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which do not have to be and can be refactored without affecting any code outside the component.
Memory management in the case of C++ on Windows must be well thought out. Normally you should not free memory outside of the DLL that allocated it; if you do, your component may fail when different runtimes or compiler versions are used.
So I think that using shared components will help keep the software better organized.
Our product currently spans a large number of technologies, including Java, PL/SQL, VB.Net and ABAP. We have a fairly mature source control and build system set up for all of the languages except ABAP, which is still in the stone ages. Since SAP has a build system set up within it, our engineers do all of their development in an SAP environment, export transports, and check those into source control. Since we support a number of SAP versions, it becomes very difficult to track versions and migrate code across 4.6, 4.7, 5.0, etc.
My ideal process would be to check the ABAP code into source control in text files, and then load it into SAP and generate the transports as part of the build process. The SAP engineers don't think there are tools to support this model.
If you are managing ABAP code in a source control system, what does your process look like? Are there tools available (preferably command-line) for loading ABAP code into SAP? How do your engineers manage the code/test/debug cycle? Do they code in SAP and then export the code when finished, or edit in an external editor?
I used SAPLINK (mentioned in previous answer) for that purpose. There is also a related project called "zake", that supposedly can automate some of the tasks but I never used it. I simply exported my code manually to so-called slinkees (they contain single objects like function groups; nuggets on the other hand contain several objects).
Reasons to use some external source control system:
correlation to non-abap source code (as our software consisted of .net and abap code)
hosting / maintaining SAP was not something we were exactly good at, so it was good to know you had your code in a safe place
one thing though: you need at least WAS 620 in order to use saplink
I'm interested as to what the benefit is of version control outside the ABAP stack of the SAP system.
I've never seen anyone use external source code control for ABAP, as it's built right in. I've never seen anyone code ABAP outside the SAP system either. It really doesn't fit the model.
SAP's ABAP stack is a single-development system environment. All the developers log on to the one system and develop there. The system records versions automatically, and groups changed objects into transports. A transport is just a list of changed objects. Once you export the transport, the version numbers are incremented for each object and you get the package for the other systems.
The ABAP stack also doesn't really have a "build" concept as such. Everything you do is a patch.
Also check out SAP CTS+ which is used for managing transports and version control of ABAP and JAVA based components.
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0249083-c0ab-2a10-78b8-b7a7854b1070
At the very least, modifications should be done and tested in an SAP development system. Nobody uses an external editor with ABAP. (SAP Java on the other hand...) There is no reason why you can't keep backups of the SAP code, either directly, as text files, or, (preferably) with SAPLink or transport dump files. (Ask your BASIS people about the transport files). Realize that if you go the text file route, you might miss out on things like field text, etc., which are stored elsewhere in the database.
Hi,
As Dom told you, SAP has its own version management. However, in order to make regular saves between transport releases, you might use tools like:
SAPLink (as Wili aus Rohr said)
ZAPLink
These tools can be used to extract ABAP components into XML. I really do not advise making automatic imports into SAP, for several reasons:
these tools come with no guarantees
not all ABAP components can be handled this way
you will lose SAP support if you do this on a productive SAP system
But it might be interesting to use tools like (Google Code) to display software changes in detail, which can be more complicated with ABAP Objects.
I developed this on the ZAPLink framework with a ZAPLINK_EXTRACTOR program that exports SAP components into XML when they have changed. This prevents the XML file from changing (new file but same content) and from being detected as a change by tools such as Mercurial.
Hope it helps.
Keep in mind that you should use SAP tools to change SAP components. An SAP consultant can explain this to you in detail.
Taryck.
[http://www.steria.com Steria (France)]
The 2020 answer to the question is simply: Use AbapGit.
It gives you all the advantages of modern version control, is fully documented, open source and works like a charm.