GNOME's GLib library provides a number of functions for read-write locks, among them g_rw_lock_writer_lock() and g_rw_lock_reader_lock() [https://developer.gnome.org/glib/stable/glib-Threads.html#g-rw-lock-writer-lock].
Is the implementation of these functions close to what is described in this Wikipedia article [https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock]? More specifically, into which category do these functions fall: read-preferring, write-preferring, or unspecified priority?
Thanks
Yes, GRWLock implements a standard read/write lock as described in the Wikipedia article. It has unspecified priority rules.
On Unix systems, GRWLock is actually implemented using the pthread_rwlock_*() functions. These also have unspecified priority rules, but this at least means you know it will behave the same as most other read/write lock implementations on your system.
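For comparison, here is what a read-preferring lock (one of the Wikipedia categories) looks like, sketched in Python rather than C for brevity. This follows the Wikipedia pseudocode and is not how GRWLock or pthread_rwlock is actually implemented:

    import threading

    class ReadPreferringRWLock:
        """Minimal read-preferring RW lock following the Wikipedia
        pseudocode. A steady stream of readers can starve a writer,
        which is exactly the policy choice GRWLock leaves unspecified."""
        def __init__(self):
            self._readers = 0
            self._readers_lock = threading.Lock()  # guards the counter
            self._writer_lock = threading.Lock()   # held while anyone writes

        def reader_lock(self):
            with self._readers_lock:
                self._readers += 1
                if self._readers == 1:   # first reader locks out writers
                    self._writer_lock.acquire()

        def reader_unlock(self):
            with self._readers_lock:
                self._readers -= 1
                if self._readers == 0:   # last reader lets writers in
                    # threading.Lock may be released by a different thread
                    # than the one that acquired it, so this is legal.
                    self._writer_lock.release()

        def writer_lock(self):
            self._writer_lock.acquire()

        def writer_unlock(self):
            self._writer_lock.release()

Because new readers keep incrementing the counter while a writer waits on _writer_lock, readers are preferred; a write-preferring lock would additionally block new readers as soon as a writer is queued.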
An answer to Is Javascript a Functional Programming Language? said that:
Functional programming means that the program is conceptualized as an evaluation of a function rather than as a control flow. The code is a description of functions and has no inherent concept of control flow.
I had understood that when a language supports first-class functions and, by its design objectives, has no inherent control flow, it must be classified as a functional language.
So why does Smalltalk, a functional language, not support other functional features, such as immutability, algebraic data types, pattern matching, or partial application?
Smalltalk was designed on top of the following features provided by the Virtual Machine:
Object allocation: The #basicNew and #basicNew: primitives
Automatic deallocation: The GC
Message sends: The send family of bytecodes
Blocks: The [:arg | ...] syntax (see below)
Non-local returns: The [:arg | ... ^result] syntax
Late binding: The method lookup mechanism
Native code compilation: The interpreter (see below)
Modern implementations added:
Block Closures: Which replaced blocks
Fast compilation: The JIT compiler, which replaced the interpreter
Stack unwind: The #ensure: message
Note that other "features" such as the Smalltalk Compiler, the Debugger or the Exception mechanism are not in the list because they can be derived from others (i.e., they are implemented in user code.)
These features were identified as the fundamental building blocks for a general-purpose object-oriented environment meant to run on the bare metal (i.e., with no operating system support).
What the designers had in mind wasn't functional programming. Instead they had in mind the uniform metaphor that everything is an object and every computation is a message send. To this end, blocks and non-local returns played the role of modeling "functions" as objects too, so as to make sure that every known concept got included in the OO paradigm. This doesn't mean that they had functional programming as a goal. They didn't include other features (functional or not) because they were trying to identify a minimal set of primitive elements that would support a general-purpose system with no hindrances.
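To make that uniform metaphor concrete, here is a rough transliteration into Python (illustrative only; real Smalltalk Booleans are the singletons true and false). In Smalltalk even a conditional is a message send, with blocks (closures) passed as arguments, so no primitive control-flow construct is needed:

    # Smalltalk's ifTrue:ifFalse: as a message sent to a Boolean object,
    # with two blocks as arguments. The receiver decides which block runs.
    class SmalltalkTrue:
        def if_true_if_false(self, true_block, false_block):
            return true_block()    # True evaluates the first block

    class SmalltalkFalse:
        def if_true_if_false(self, true_block, false_block):
            return false_block()   # False evaluates the second block

    # Roughly: (3 < 4) ifTrue: ['yes'] ifFalse: ['no']
    result = SmalltalkTrue().if_true_if_false(lambda: "yes", lambda: "no")
    print(result)  # yes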
The following two statements are the core of the Dependency Inversion Principle (DIP):
"High-level modules should not depend on low-level modules. Both should depend on abstractions."
"Abstractions should not depend on details. Details should depend on abstractions."
I have read various books and articles about DIP; all of them explain the first statement, but none of them explains the second: "Abstractions should not depend on details. Details should depend on abstractions." Please explain what exactly this second statement means.
It just means that you don't want to have to change an abstraction whenever a detail changes, because details are likely to change.
If an abstraction did depend on a detail, then, since both high-level and low-level modules depend on that abstraction, they would all have to be changed whenever the detail changed. This would obviously be undesirable.
Don't design an interface (the abstraction) by looking at an implementation (the details) first.
For example, you can define a Repository interface. But while designing that interface, you should not shape the abstraction around specific solutions such as an SQL implementation or a NoSQL implementation (the details).
Keep the Repository interface generic, and let the SQL-specific or NoSQL-specific features live in the implementations, as in the sketch below.
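A minimal sketch in Python (the class and method names here are hypothetical, chosen just for illustration) of what such a generic Repository abstraction might look like, with the storage specifics confined to the implementations:

    from abc import ABC, abstractmethod

    # The abstraction: no SQL or NoSQL vocabulary leaks into the interface.
    class Repository(ABC):
        @abstractmethod
        def save(self, entity): ...

        @abstractmethod
        def find_by_id(self, entity_id): ...

    # Details depend on the abstraction, not the other way around.
    class SqlRepository(Repository):
        def save(self, entity):
            ...  # INSERT/UPDATE statements live here, hidden from callers

        def find_by_id(self, entity_id):
            ...  # SELECT ... WHERE id = ?

    class InMemoryRepository(Repository):
        """Assumes entities expose an .id attribute."""
        def __init__(self):
            self._store = {}

        def save(self, entity):
            self._store[entity.id] = entity

        def find_by_id(self, entity_id):
            return self._store.get(entity_id)

Swapping SqlRepository for InMemoryRepository requires no change to the interface or to the high-level code that uses it.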
You will get more clarity about the second statement if you read this article by Martin Fowler:
Switch out the repository for a different storage mechanism: there's no mention of SQL in its interface, so we can use an in-memory solution, a NoSQL solution, or a RESTful service.
You should think of "details" as "implementations":
- If you declare an interface, it does not depend on its future implementation classes.
- On the other hand, implementation classes reference their interface and implement its methods, so they depend on it.
Are there languages that idiomatically use both notions at the same time? When will that be necessary if ever? What are the pros and cons of each approach?
Background to the question:
I am a novice (with some python knowledge) trying to build a better picture of how multimethods and interfaces are meant to be used (in general).
I assume that they are not meant to be mixed: Either one declares available logic in terms of interfaces (and implements it as methods of the class) or one does it in terms of multimethods. Is this correct?
Does it make sense to speak of a spectrum of OOP notions where:
one starts with naive subclassing (data and logic implementation (methods) are tightly coupled in the class),
then passes through interfaces (logic is declared in the interface; data and the logic implementation are in the class),
and ends at multimethods (logic is in the signature of the multimethod, the logic implementation is scattered, and data is in the class, which is only a data structure with nice handles)?
To begin, this answer largely derives from my experience developing in Common Lisp and Clojure.
Yes, multimethods do carry some runtime cost, but they offer almost unlimited flexibility in crafting a dispatch mechanism that precisely models whatever you might want to accomplish through their specialization.
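To make that concrete, here is a minimal Clojure-style multimethod sketched in Python (the multimethod helper and the collide example are hypothetical, not a library API). The key point is that the dispatch value is computed by an arbitrary function of all the arguments, not just the class of a single receiver:

    def multimethod(dispatch_fn):
        """Build a callable that dispatches on dispatch_fn(*args)."""
        registry = {}

        def call(*args, **kwargs):
            key = dispatch_fn(*args, **kwargs)
            try:
                impl = registry[key]
            except KeyError:
                raise TypeError(f"no implementation for dispatch value {key!r}")
            return impl(*args, **kwargs)

        def register(key):
            def decorator(fn):
                registry[key] = fn
                return fn
            return decorator

        call.register = register
        return call

    # The decorated body is the dispatch function (like Clojure's defmulti):
    @multimethod
    def collide(a, b):
        return (a["kind"], b["kind"])

    @collide.register(("asteroid", "ship"))
    def _(a, b):
        return "ship destroyed"

    @collide.register(("ship", "ship"))
    def _(a, b):
        return "both ships damaged"

    print(collide({"kind": "asteroid"}, {"kind": "ship"}))  # ship destroyed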
Protocols and interfaces, on the other hand, are also involved with some of these same matters of specialization and dispatch, but they work and are used in a very different manner. These are facilities that follow a convention wherein single dispatch provides only a straightforward mapping from a given class to one specialized implementation. The power of protocols and interfaces is in their typical use to define some group of abstract capabilities that, taken together, fully specify the API for a concept. For example, a "pointer" interface might contain the three or four concepts that represent the notion of what a pointer is. So the general interface of a pointer might look like REFERENCE, DEREFERENCE, ALLOCATE, and DISPOSE. Thus the power of an interface comes from its composition of a group of related definitions that, together, express a complete abstraction: when implementing an interface in a specific situation, it is normally an all-or-nothing endeavor. Either all four of those functions are present, or whatever this thing is does not match our definition of a pointer.
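Here is that pointer interface sketched as a Python abstract base class (the names follow the REFERENCE/DEREFERENCE/ALLOCATE/DISPOSE concepts above; BoxPointer is a made-up illustration, not a real library type):

    from abc import ABC, abstractmethod

    class Pointer(ABC):
        @abstractmethod
        def allocate(self, size): ...

        @abstractmethod
        def dispose(self): ...

        @abstractmethod
        def reference(self, obj): ...

        @abstractmethod
        def dereference(self): ...

    # Implementing the interface is all-or-nothing: a subclass that omits
    # any of the four methods cannot even be instantiated.
    class BoxPointer(Pointer):
        def allocate(self, size):
            self._slots = [None] * size

        def dispose(self):
            self._slots = None

        def reference(self, obj):
            self._slots[0] = obj

        def dereference(self):
            return self._slots[0]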
Hope this helped a little.
Dan Lentz
I have a component that exposes an API with some ten functions in all. I can think of two ways to achieve this:
Expose each piece of functionality as a separate function.
Expose only one function that takes XML as input. Based on the request_Type specified and the parameters passed in the XML, I internally call the respective function.
Q1. Will the second design be more loosely coupled than the first?
I keep reading that I should make my components loosely coupled; should I really go to this extent to achieve loose coupling?
Q2. Which one of these would be a better design in terms of OOP and why?
Edit:
If I am exposing this API over D-Bus for others to use, is type checking still a consideration when comparing the two approaches? From what I understand, type checking is done at compile time, but when a function is exposed over some IPC mechanism, does the issue of type checking still come into the picture?
The two alternatives you propose do not differ in the (obviously quite large) number of "functions" you want to offer from your API. However, the second has many disadvantages: you lose all strong type checking, it becomes much harder to document the functionality, and so on. (The only advantage I see is that you don't need to change your API signature when you add functionality. But the corresponding disadvantage is that users will not discover API changes, such as deleted functions, until run time.)
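A small sketch of the contrast in Python (all names here are hypothetical). With separate functions, the signature documents and checks the arguments; with one XML entry point, every mistake surfaces only when a request is processed:

    from xml.etree import ElementTree

    # Design 1: separate, typed functions. Wrong arguments fail immediately
    # and visibly at the call site (or under a static type checker).
    def create_user(name: str, age: int) -> None: ...
    def delete_user(name: str) -> None: ...

    # Design 2: one generic entry point dispatching on request_Type.
    def handle_request(xml: str) -> None:
        root = ElementTree.fromstring(xml)
        request_type = root.get("request_Type")
        if request_type == "create_user":
            # A missing "age" attribute or a typo in "request_Type" is only
            # detected here, at run time, deep inside the dispatcher.
            create_user(root.get("name"), int(root.get("age")))
        elif request_type == "delete_user":
            delete_user(root.get("name"))
        else:
            raise ValueError(f"unknown request_Type: {request_type}")

    handle_request('<request request_Type="create_user" name="Ann" age="30"/>')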
More relevant to this question is the Single Responsibility Principle (http://en.wikipedia.org/wiki/Single_responsibility_principle). As you are talking about OOP, you should not expose your ten functions within one class but split them among different classes, each with a single responsibility. Defining good "responsibilities" and roles requires some practice, but following some basic guidelines will help you get started quickly. See Are there any rules for OOP? for a good starting point.
Reply to the question edit
I haven't used D-Bus, so this might be totally wrong. But from a quick look at the tutorial, I read:
Each object supports one or more interfaces. Think of an interface as a named group of methods and signals, just as it is in GLib or Qt or Java. Interfaces define the type of an object instance.
DBus identifies interfaces with a simple namespaced string, something like org.freedesktop.Introspectable. Most bindings will map these interface names directly to the appropriate programming language construct, for example to Java interfaces or C++ pure virtual classes.
As far as I understand, D-Bus has the concept of distinct objects which provide interfaces consisting of several methods. This means (to me) that my answer above still applies. The "D-Bus native" way of specifying your API would be to expose interfaces, and I don't see any reason why good OOP design guidelines shouldn't be valid here. As D-Bus seems to map these even to native language constructs, this is even more likely.
Of course, nothing stops you from building your own API description language in XML. However, things like that are a kind of abuse of the underlying technique. You should have good reasons for doing them.
Relatively new to Cocoa here.
This question is about NSFileHandle, but I have a feeling the answer may be relevant in a broader Cocoa programming context.
I'm just wondering:
why there are different NSFileHandle constructor flavors (i.e., one each for reading, writing, and both).
how access control for these file-manipulation functions is implemented, especially given that all of these constructors return a generic (id) that doesn't reveal whether the handle is for reading, writing, or both.
Thanks!
1) Because on most operating systems (Mac OS X/iOS included), reading and writing are two separate operations, and a file handle that can do one is generally not able to do the other (unless explicitly opened with both access types).
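The same idea is easy to see in Python (shown here for illustration, since NSFileHandle is Objective-C): the handle's capabilities are fixed by the mode it was opened with, and the operating system enforces them at run time regardless of the object's declared type.

    import io

    f = open("demo.txt", "w")  # write-only handle, analogous to a handle
                               # created only for writing
    f.write("hello\n")
    try:
        f.read()               # the open mode forbids this
    except io.UnsupportedOperation as e:
        print("read on a write-only handle failed:", e)
    finally:
        f.close()

    # A handle opened with both access types ("r+") can do both.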
2) We don't know how NSFileHandle is implemented. :) Or maybe we do know, but it's an implementation detail, so even if we know, we should pretend we don't.