Suppose I have written a library in a programming language (e.g. Java) to interact with an external component (e.g. a database).
I would now like the community to provide implementations in other languages.
How can I enable and encourage other developers to provide implementations that are identical in functionality to mine?
Some examples may be:
Provide a written specification of behaviour
Provide a reference implementation
Provide a testing framework so they can validate their implementation (can this be done across multiple languages?)
What other options are available?
A common approach that covers everything you are after is coding conventions. These are sets of guidelines for a programming language that recommend a programming style, practices, and methods for each aspect of a program. Such conventions usually cover file organization, indentation, comments, declarations, statements, white space, naming conventions, programming practices, programming principles, rules of thumb, architectural best practices, and so on.
About
enable and encourage other developers to provide implementations that are identical in functionality to mine.
you can use interfaces (protocols). Although an interface defines an abstract type that contains no data or code, it also defines behavior in the form of method signatures.
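To make that concrete, here is a minimal Java sketch of "behavior as method signatures". The names (`KeyValueStore`, `InMemoryStore`) are hypothetical, not from any real library; the interface is the contract that every port, in any language, would have to match.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical contract: the signatures define the behavior every implementation must provide. */
interface KeyValueStore {
    void put(String key, String value);   // store a value; overwrite if the key exists
    String get(String key);               // return the stored value, or null if absent
    boolean delete(String key);           // remove a key; true if it was present
}

/** A trivial in-memory reference implementation of the same contract. */
class InMemoryStore implements KeyValueStore {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
    public boolean delete(String key) { return data.remove(key) != null; }
}
```

Re-expressing the interface in another language (a Python abstract base class, a Go interface, ...) gives implementers a precise target, even before they read a prose specification.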
Your examples are good. But in addition to
Provide a testing framework so they can validate their implementation
you can introduce the main ideas of test-driven development (TDD):
Establishing the goals of different stakeholders required for a vision to be implemented
Drawing out features which will achieve those goals using feature injection
The goal of developer TDD is to specify a detailed, executable design for your solution
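On the sub-question of whether a testing framework can work across multiple languages: one common approach is to express the test suite as plain data ("test vectors") that each language's harness replays against its own implementation. A minimal sketch in Java, with hypothetical names (`Slugifier`, `SlugSpec`):

```java
import java.util.List;

/** Hypothetical function under test: every language port must implement it. */
class Slugifier {
    static String slugify(String s) {
        return s.trim().toLowerCase()
                .replaceAll("[^a-z0-9]+", "-")   // collapse runs of non-alphanumerics
                .replaceAll("^-|-$", "");        // trim stray leading/trailing hyphens
    }
}

/** Language-agnostic test vectors: (input, expected) pairs shared by all ports. */
class SlugSpec {
    // In practice these vectors would live in a shared data file (JSON, CSV, ...)
    // parsed by each language's harness; they are inlined here for brevity.
    static final List<String[]> CASES = List.of(
            new String[]{"Hello, World!", "hello-world"},
            new String[]{"  spaces  ", "spaces"},
            new String[]{"Already-slugged", "already-slugged"});

    static void runAll() {
        for (String[] c : CASES) {
            String got = Slugifier.slugify(c[0]);
            if (!got.equals(c[1]))
                throw new AssertionError(c[0] + " -> " + got + ", want " + c[1]);
        }
    }
}
```

Only the small harness needs rewriting per language; the vectors themselves are the shared, executable specification.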
read more
I know that frameworks provide useful interfaces and classes that save a lot of time in implementation phase, so my question is:
Should the framework interfaces and classes be included in my project's class diagram design or not?
and if it is,
Does this affect the reusability of the design if I decide to change the framework in the future?
UML diagrams are intended to be read by different interest groups. Business likes to see requirements, use cases and activities. Architects/testers need that as a basis to develop/test the system. And the results produced by the architects (static and behavioral class diagrams) are meant to be read by programmers. Each reader group has a focus on certain parts but will eventually peek more or less into border areas (from their perspective).
So to answer your question: yes, frameworks shall be part of the model. Architects should pay attention as to how to cut the system. Frameworks should be designed with a different (broader) scope. So eventually you have frameworks that will be used only partially in a system. Or a system has a potential for a framework and it will be designed to be easily decoupled. Of course, this is a tricky task and architects need lots of experience to fulfill all the needs that come from business and eventually other stakeholders.
No, theoretically it shouldn't, but you're also free to do so.
As stated by the authors of UML: Rumbaugh, Jacobson, Booch
in The Unified Modeling Language Reference Manual, page 25:
Across implementation languages and platforms. The UML is intended to be usable for systems implemented in various implementation languages and platforms, including programming languages, databases, 4GLs, organization documents, firmware, and so on. The front-end work should be identical or similar in all cases, while the back-end work will differ somewhat for each medium.
Can anyone think of the shortest possible definition for API, especially for someone who doesn't know programming? I'm using it in an essay and would like to footnote the definition for readers that might not understand the meaning or context of an app programming interface without tripping myself and the flow of the work.
From the wikipedia disambiguation page:
API, originally Advanced Programming Interface but now more commonly known by its near-synonym, Application programming interface, is any defined inter-program interface.
"any defined inter-program interface" is nice, but maybe a little broad for your purposes.
It's a lot of things (see wikipedia). But I usually think of it as the collection of tools and documentation that allow a user to interact with an external library or base of information.
Howstuffworks has a good definition:
An application-programming interface (API) is a set of programming instructions and standards for accessing a Web-based software application or Web tool.
I don't think API implies that the application must be web-based, but I otherwise like this definition.
I am new to Lisp, and I am learning Scheme through the SICP videos. One thing that seems not to be covered (at least at the point where I am) is how to do testing in Lisp.
In usual object oriented programs there is a kind of horizontal separation of concerns: methods are tied to the object they act upon, and to decompose a problem you need to fragment it in the construction of several objects that can be used side by side.
In Lisp (at least in Scheme), a different kind of abstraction seems prevalent: in order to attack a problem you design a hierarchy of domain-specific languages, each of which is built upon the previous one and acts at a coarser level of detail and a higher level of abstraction.
(Of course this is a very rough description, and objects can be used vertically, or even as building blocks of DSLs.)
I was wondering whether this has some effect on testing best practices. So the question is two-fold:
What are the best practices while testing in Lisp? Are unit tests as fundamental as in other languages?
What are the main test frameworks (if any) for Lisp? Are there mocking frameworks as well? Of course this will depend on the dialect, but I'd be interested in answers for Scheme, CL, Clojure or other Lisps.
Here's a Clojure specific answer, but I expect most of it would be equally applicable to other Lisps as well.
Clojure has its own testing framework called clojure.test. This lets you simply define assertions with the "is" macro:
(deftest addition
  (is (= 4 (+ 2 2)))
  (is (= 7 (+ 3 4))))
In general I find that unit testing in Clojure/Lisp follows very similar best practices to testing in other languages. It's the same principle: you want to write focused tests that confirm your assumptions about a specific piece of code behaviour.
The main differences / features I've noticed in Clojure testing are:
Since Clojure encourages functional programming, it tends to be the case that tests are simpler to write because you don't have to worry as much about mutable state - you only need to confirm that the output is correct for a given input, and not worry about lots of setup code etc.
Macros can be handy for testing - e.g. if you want to generate a large number of tests that follow a similar pattern programmatically.
It's often handy to test at the REPL to get a quick check of expected behaviour. You can then copy the test code into a proper unit test if you like.
Since Clojure is a dynamic language you may need to write some extra tests that check the type of returned objects. This would be unnecessary in a statically typed language where the compiler could provide such checks.
RackUnit is the unit-testing framework that's part of Racket, a language and implementation that grew out of Scheme. Its documentation contains a chapter about its philosophy: http://docs.racket-lang.org/rackunit/index.html.
Testing frameworks that I am aware of for Common Lisp include Stefil (in two flavours, hu.dwim.stefil and the older stefil), FiveAM, and lisp-unit. Searching in the Quicklisp library list also turned up "unit-test", "xlunit", and monkeylib-test-framework.
I think that Stefil and FiveAM are most commonly used.
You can get all of them from Quicklisp.
Update: Just seen on Vladimir Sedach's blog: Eos, which is claimed to be a drop-in replacement for FiveAM without external dependencies.
I'm from a non-programming background and have often come across the terms like Programming Paradigm, Design Pattern and Application Architecture. Although I think I have a vague understanding of what these terms mean, I'd appreciate if someone could clarify what each is, how it is different from the other and how these concepts apply to Objective C.
Programming Paradigm: Something like "Functional Programming", "Procedural Programming", and "Object-Oriented Programming". The programming paradigm and the languages that use it inform how the code gets written. For example, in object-oriented programming the code is divided up into classes (sometimes a language feature, sometimes not (e.g. javascript)), and the language typically supports inheritance and some type of polymorphism. The programmer creates the classes, and then instances of the classes (i.e. the objects), to carry out the operation of the program. In functional languages, state changes on the computer are very heavily controlled by the language itself. Functions are first-class objects, although not all languages where functions are first-class objects are functional programming languages (a topic of considerable debate). Code written in a functional language involves lots of nested functions; almost every step of the program is a new function invocation. For procedural programming, C programs and bash scripts are good examples: you just say do step 1, do step 2, etc., without creating classes and whatnot.
Design Pattern: A design pattern is a useful abstraction that can be implemented in any language. It is a "pattern" for doing things. For example, if you have a bunch of steps you want to implement, you might use the 'composite' and 'command' patterns to make your implementation more generic. Think of a pattern as an established template for solving a common coding task in a generic way.
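A tiny Java sketch of that composite/command idea, with hypothetical names (`Command`, `Sequence`, `Step`): each step hides behind a uniform interface, and the composite runs any number of them without knowing what they do.

```java
import java.util.ArrayList;
import java.util.List;

/** Command pattern: every step is an object with one uniform method. */
interface Command {
    void execute(StringBuilder log);
}

/** Composite pattern: a command built out of other commands, run in order. */
class Sequence implements Command {
    private final List<Command> steps = new ArrayList<>();
    Sequence add(Command c) { steps.add(c); return this; }
    public void execute(StringBuilder log) {
        for (Command c : steps) c.execute(log);   // treat leaves and composites alike
    }
}

/** A leaf command; here it just records its name so we can observe the order. */
class Step implements Command {
    private final String name;
    Step(String name) { this.name = name; }
    public void execute(StringBuilder log) { log.append(name).append(';'); }
}
```

The caller composes `new Sequence().add(new Step("fetch")).add(new Step("save"))` and executes it as a single command; adding a new step type never touches the composite.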
Application Architecture: Takes into consideration how you build a system to do stuff. So, for a web application, the architecture might involve x number of gateways behind a load balancer, which asynchronously feed queues. Messages are picked up by y processes running on z machines, with 1 primary db and a backup slave. Application architecture involves choosing the platform, languages, and frameworks used. This is different from software architecture, which speaks more to how to actually implement the program given the software stack.
Some quick definitions,
Application Architecture describes the overall architecture of the software. For instance, web-based programs typically use a layered architecture where functionality is divided into several layers, such as user interface (html generation, handling commands from users), business logic (rules for how the functions of the software are executed) and database (for persistent data). In contrast, a data processing application could use a so-called pipes-and-filters architecture, where a piece of data passes through a pipeline and different modules act on the data.
Design Patterns are a much lower-level tool, providing proven models for how to organize code to gain specific functionality while not compromising the overall structure. Easy examples might include a Singleton (how to guarantee the existence of a single instance of a class) or a Facade (how to provide a simple external view to a more complex system).
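A minimal Java sketch of the Singleton example mentioned above (the class name `AppConfig` is hypothetical; double-checked locking is one common way to make the lazy initialization thread-safe):

```java
/** Singleton sketch: exactly one instance, created lazily and safely. */
final class AppConfig {
    private static volatile AppConfig instance;
    private final String name;

    private AppConfig() { this.name = "default"; }  // private: no outside construction

    static AppConfig getInstance() {
        if (instance == null) {                     // first check, without locking
            synchronized (AppConfig.class) {
                if (instance == null)               // second check, under the lock
                    instance = new AppConfig();
            }
        }
        return instance;
    }

    String name() { return name; }
}
```

Every caller of `AppConfig.getInstance()` receives the same object, which is the whole point of the pattern.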
Paradigms, on the other hand, are the other extreme: guiding principles for how code is actually laid out, and each requires quite a different mindset to apply. For instance, procedural programming is mainly concerned with dividing the program logic into functions and bundling those functions into modules. Object-oriented programming aims to encapsulate the data and the operations that manipulate the data into objects. Functional programming emphasizes the use of functions instead of separate statements following one another, avoiding side effects and state changes.
Objective-C is mostly an object-oriented extension to C; design patterns and architecture are not language-specific constructs.
A programming paradigm is a fundamental style of computer programming.
Software Design Patterns - are best-practice solutions to common software design problems. There are many design patterns for common problems. To learn more about design patterns you can read some books from this list: 5 Best Books for Learning Design Patterns.
Application Architecture - the science and art of ensuring that the suite of applications an organization uses to create its composite application is scalable, reliable, available, and manageable.
I guess any of these terms would apply to all programming languages; design patterns exist in all programming languages.
These are logical terms defined to create higher level of abstraction.
Hope this helps
Think of the vernacular interpretation of those terms (i.e., outside the field of computer science).
Paradigms are all-encompassing views of computation that affect not only what kinds of things you can do, but even what kinds of thoughts you can have; functional programming is an example of a programming paradigm.
Patterns are simply well-established programming tricks, codified in some semi-formal manner.
Application architecture is a broad term describing how complex applications are organised.
Objective-C primarily adds elements of the OO paradigm to the imperative language, C. Patterns and architecture are largely orthogonal to the language.
Simple English words
A paradigm is a way of thinking when programming, in which first-class concepts are used to organize the software. For example, OOP uses classes as first-class citizens, functional programming (lambda calculus) uses functions and their compositions, aspect-oriented programming uses aspects of a system, and so on. When thinking through a solution, the first things that come to your mind are the first-class citizens. The objective is to organize the solution into software components.
A design pattern is a common successful use of software components.
An application architecture is a set of design patterns put together in order to realize use-case scenarios.
Paradigm: a style or approach to programming. For example, in OOP we use the concepts of objects and classes throughout the program. These objects contain data and behaviours, and we connect them logically to complete the task.
Design Patterns: tried and tested, reusable solutions to problems we encounter in everyday programming. For example, within the OOP paradigm there are a number of patterns that help us solve specific problems.
Aspect-oriented programming is a subject matter that has been very difficult for me to find any good information on. My old Software Engineering textbook only mentions it briefly (and vaguely), and the wikipedia and various other tutorials/articles I've been able to find on it give ultra-academic, highly-abstracted definitions of just what it is, how to use it, and when to use it. Definitions I just don't seem to understand.
My (very poor) understanding of AOP is that there are many aspects of producing a high-quality software system that don't fit neatly into a nice little cohesive package. Some classes, such as Loggers, Validators, DatabaseQueries, etc., will be used all over your codebase and thus will be highly-coupled. My (again, very poor) understanding of AOP is that it is concerned with the best practices of how to handle these types of "universally-coupled" packages.
Question: Is this true, or am I totally off? If I'm completely wrong, can someone please give a concise, laymen's explanation for what AOP is, an example of a so-called aspect, and perhaps even provide a simple code example?
Separation of Concerns is a fundamental principle in software development. There is a classic paper by David Parnas, On the Criteria To Be Used in Decomposing Systems into Modules, that may introduce you to the subject; also read Uncle Bob's SOLID principles.
But then there are cross-cutting concerns that may appear in many use cases, like authentication, authorization, validation, logging, transaction handling, exception handling, caching, etc., that span all the layers in software. If you want to tackle the problem without duplication, employing the DRY principle, you must handle it in a sophisticated way.
You can use declarative programming; in .NET, for example, this could simply mean annotating a method or a property with an attribute, so that the behavior of the code changes at runtime depending on those annotations.
You can find a nice chapter on this topic in Sommerville's Software engineering book
Useful links
C2 wiki CrossCuttingConcern, MSDN, How to Address Crosscutting Concerns in Aspect Oriented Software Development
AOP is a technique in which we extract the cross-cutting concerns (logging, exception handling, ...) from our code into their own aspects, leaving the original code focused only on the business logic. Not only does this make our code more readable and maintainable, it also keeps the code DRY.
This can be better explained by an example:
Aspect Oriented Programming (AOP) in the .net world using Castle Windsor
or
Aspect Oriented Programming (AOP) in the .net world using Unity
AOP is about cross-cutting concerns, i.e. things that you need to do throughout the whole application, for instance logging. Suppose you want to trace when you enter and exit a method. This is very easy with aspects: you basically specify a "handler" for an event, such as entering a method. If necessary you can also specify with "wildcards" which methods you are interested in, and then it is just a matter of writing the handler code, which could for instance log some info.
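That "handler" idea can be sketched with a plain JDK dynamic proxy, without any AOP framework. The interface and class names here (`Greeter`, `TraceHandler`) are hypothetical; `InvocationHandler.invoke` is the hook that runs around every method call.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

/** Pure business logic: knows nothing about logging. */
class SimpleGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

/** The "aspect": records entry and exit around every method call on the target. */
class TraceHandler implements InvocationHandler {
    private final Object target;
    final StringBuilder log = new StringBuilder();

    TraceHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        log.append("enter ").append(m.getName()).append('\n');
        Object result = m.invoke(target, args);     // delegate to the real logic
        log.append("exit ").append(m.getName()).append('\n');
        return result;
    }
}
```

Usage: wrap the target once and call it through the interface; the tracing happens automatically:

```java
TraceHandler h = new TraceHandler(new SimpleGreeter());
Greeter g = (Greeter) Proxy.newProxyInstance(
        Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, h);
g.greet("Ada");  // "enter greet" and "exit greet" are recorded around the call
```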
Aspect-Oriented Programming is basically about separating the cross-cutting (non-functional) concerns and developing them as aspects - such as security, logging, and monitoring - kept aside so that whenever you need them in your application you can use them plug-and-play. The benefit is cleaner, leaner code, and programmers can focus on the business logic (core concerns), so a more modular, higher-quality system can be developed.