Do I need to learn Haskell to write Plutus in Cardano? - smartcontracts

https://cardanodocs.com/technical/plutus/introduction/
Should I learn Plutus? I believe it is a language with which I can write smart contracts on Cardano.

Plutus is a Haskell framework for compiling smart contract code to run on the Cardano blockchain. Just as you need to know JavaScript in order to learn React, you will need to learn functional programming (Haskell, Scala, etc.) in order to write for Plutus.

As the docs say:
Plutus is the native smart contract language for Cardano. It is a Turing-complete language written in Haskell, and Plutus smart contracts are effectively Haskell programs. By using Plutus, you can be confident in the correct execution of your smart contracts. It draws from modern language research to provide a safe, full-stack programming environment based on Haskell, the leading purely-functional programming language.
So, if you want to feel confident programming Plutus, it's recommended to learn Haskell.
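To get a feel for the kind of Haskell you'd be learning, here's a small example of the style - plain Haskell, not Plutus-specific, and the names are made up:

-- Plain Haskell, not Plutus: a pure validation function with no side effects.
data Result = Accept | Reject String
  deriving Show

validatePayment :: Integer -> Integer -> Result
validatePayment balance amount
  | amount <= 0      = Reject "amount must be positive"
  | amount > balance = Reject "insufficient funds"
  | otherwise        = Accept

main :: IO ()
main = print (validatePayment 100 42)  -- prints Accept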

Plutus is similar to Haskell, but it is not Haskell, and you don't need to learn one to learn the other. Moreover, Plutus is still in development; the language that the Cardano testnet currently supports is Solidity. My guess is that they will keep supporting Solidity for a long time, even after Plutus is released. So, strictly speaking, you don't even need to learn Plutus to write smart contracts on Cardano.

Related

Why do almost all OO languages compile to bytecode?

Of the object-oriented languages I know, pretty much all but C++ and Objective-C compile to bytecode running on some sort of virtual machine. Why have so many different languages settled on compiling to bytecode, as opposed to machine code? Is it possible in principle to have a high-level, memory-managed OOP language that compiles to machine code?
Edit: I'm aware that multiplatform support is often advanced as an advantage of this approach. However, it's quite possible to compile natively for multiple platforms without writing a new compiler per platform. One can, for example, emit C code and then compile that with GCC.
There's no inherent reason; in fact, it's something of a coincidence. OOP is now the leading concept in "big" programming, and so are virtual machines.
Also note that there are two distinct parts to traditional virtual machines, the garbage collector and the bytecode interpreter/JIT compiler, and these parts can exist separately. For example, the Common Lisp implementation SBCL compiles programs to native code but makes heavy use of garbage collection at runtime.
This is done to give a VM or JIT compiler the chance to compile the code on demand, optimally for the architecture on which it executes. It also allows cross-platform bytecode to be created once and then run on multiple hardware architectures, with hardware-specific optimizations applied to the final compiled code.
Since bytecode is not tied to a particular microarchitecture, it can be smaller than machine code: complex operations can be expressed as single bytecode instructions, whereas modern CPUs offer much more primitive ones, because the constraints in designing CPU instructions are very different from the constraints in designing a bytecode architecture.
Then there's the issue of security. The bytecode can be verified and analyzed prior to execution (e.g., checking for buffer overflows, or for variables of one type being accessed as another), and so on.
Java uses bytecode because two of its initial design goals were portability and compactness. Those both came from the initial vision of a language for embedded devices, where fragments of code could be downloaded on the fly.
Python, Ruby, Smalltalk, JavaScript, awk and so on use bytecode because writing a native compiler is a lot of work, but a textual interpreter is too slow - bytecode hits a sweet spot of being fairly easy to write, but also satisfactorily quick to run.
I have no idea why the Microsoft languages use bytecode, since for them neither portability nor compactness is a big deal. A lot of the thinking behind the CLR came out of computer scientists in Cambridge, so I imagine considerations like ease of program analysis and verification were involved.
Note that as well as C++ and Objective C, Eiffel, Ada 9X, Vala and Go are OO languages (of varying vintage) that are compiled straight to native code.
All in all, I'd say that OO and bytecode do not go hand in hand. Rather, we have a coincidental convergence of several streams of development: the traditional bytecoded interpreters of scripting languages like Python and Ruby, the mad Gosling masterplan of Java, and whatever Microsoft's motives are.
The biggest reason why most interpreted languages (not specifically OO languages) are compiled to bytecode is performance. The most expensive part of interpreting code is transforming text source into an intermediate representation. For instance, to perform something like:
foo + bar;
The interpreter would have to scan 10 characters, transform them into 4 tokens, build an AST for the operation, resolve three symbols (+ is a symbol, which depends on the types of foo and bar), all before it can perform any action that actually depends on the run-time state of the program. None of this can change from run to run, and so many languages try to store some form of intermediate representation.
Bytecode, rather than a stored AST, has a few advantages. For one, bytecode is easy to serialize, so the IR can be written to disk and reused on the next invocation, further reducing interpretation time. Another is that bytecode often takes up less RAM. Most significantly, bytecode representations are often easy to just-in-time compile, because they tend to be structurally similar to typical machine code.
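To make the AST-versus-bytecode point concrete, here's a toy sketch in Haskell (all names are illustrative): the expression foo + bar as a tree, flattened into a linear opcode list, and run on a little stack machine. Nothing is re-parsed or re-resolved at execution time; each opcode maps to one small action.

import qualified Data.Map as M

data Expr = Var String | Add Expr Expr   -- the tree (AST) form

data Op = Push String | AddOp            -- the flat bytecode form
  deriving Show

-- Flatten the tree into a linear, easily serialized instruction list.
compile :: Expr -> [Op]
compile (Var v)   = [Push v]
compile (Add a b) = compile a ++ compile b ++ [AddOp]

-- A stack machine: each opcode is a small, fixed action.
run :: M.Map String Int -> [Op] -> [Int] -> Int
run env (Push v : ops) st       = run env ops (env M.! v : st)
run env (AddOp  : ops) (y:x:st) = run env ops (x + y : st)
run _   []             [result] = result
run _   _              _        = error "malformed bytecode"

main :: IO ()
main = do
  let prog = compile (Add (Var "foo") (Var "bar"))        -- "foo + bar"
  print prog                                              -- [Push "foo",Push "bar",AddOp]
  print (run (M.fromList [("foo", 3), ("bar", 4)]) prog [])  -- 7

A real VM does far more, of course, but the shape is the same: a flat, serializable instruction list that is cheap to decode and execute.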
As another data point, the D programming language is GC'ed, OO, and a lot higher level than C++ while still being compiled to native code.
Bytecode is a significantly more flexible medium than machine code. First, it provides the basis for platform portability without the need for a compiler or for shipping source code, so a developer can distribute a single version of the application without needing to give up the source, require complex developer tools, or anticipate every potential target platform. While the latter is not always practical, it does happen, especially with developer libraries: say I distribute a library that I've only tested on Windows, but someone else uses it on Linux or Android. It happens quite frequently, actually, and most of the time it works as expected.
Bytecode is also generally faster to execute than interpreting source, because it's closer to machine instructions and therefore quicker to translate into them. Not all OO languages are compiled: Ruby, Python, and even JavaScript are traditionally interpreted, so they aren't compiled to anything ahead of time; the Ruby interpreter has to take a very flexible language and turn it into instructions, and that flexibility comes at a price paid at runtime: parsing text, generating an AST, translating the AST into machine operations, and so on. Bytecode also makes it easy to apply optimizations like JIT, where bytecode is translated directly to machine code, and even opens up the possibility of optimizations for specific hardware.
Finally, just because one language compiles to bytecode doesn't preclude other languages from taking advantage of that bytecode. Any optimization applied to the bytecode benefits every language that knows how to translate itself into it. That makes the bytecode a very important layer of reusability for other languages.
OO and bytecode compilation go back to the 70s with Smalltalk, and I'm sure someone will say LISP as early as the 50s/60s. But it really wasn't until the 90s that bytecode started to be used in production systems on a large scale.
Native compilation sounds like the optimal path, which is probably why our industry spent 20 years or more thinking it was THE ANSWER to all our problems, but over the last 15 years we've seen bytecode compilation take center stage, and it has been a significant advantage over what we did before. Looking back, we realize how much time was wasted natively compiling everything, mostly by hand.
I agree with Chubbard's answer, and I'd add that in OO languages, type information can be very important for enabling optimizations by virtual machines or last-level compilers.
It is easier to develop an interpreter than a compiler.
Effort in development of...:
interpreter < bytecode-interpreter < bytecode-jit-compiler < compiler-to-platform-independent-language < compiler-to-multiple-machine-dependent-assembler.
The general trend is to stop development at JIT compilers because of platform independence. Only the languages preferred for performance and for research in theoretical computer science are, and will be, developed in all possible directions, including new bytecode interpreters, even when good and advanced compilers to platform-independent languages and to various machine-dependent assemblers already exist.
Research in OOP languages is, let's say, dull compared to functional languages, because genuinely new language and compiler technologies are more easily expressed using mathematical category theory and mathematical descriptions of Turing-complete type systems. In other words, the research is nearly functional in itself, while imperative languages are little more than assembler front ends with some syntactic sugar. OOP languages tend to be imperative languages because functional languages already have closures and lambdas. There are other ways to implement Java-like "interfaces" in functional languages, so there is simply no need for additional object-oriented features.
In Haskell, for example, adding OOP-like features would probably be more than just a few steps backward in technology; there would be no point in doing so (and that is not only my opinion: have you ever heard of GADTs or multi-parameter type classes?). There might even be better ways to dynamically create objects with interfaces for communicating with OOP languages than changing the language itself. There are also other functional languages that explicitly combine functional and OOP aspects. There is just more science around primarily functional languages than around non-functional OO languages.
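To illustrate the "interfaces without OOP" point above, here's a minimal Haskell sketch (the names are made up): a type class plays the role a Java-like interface would, and any type implementing it can be used polymorphically, with no class hierarchy in sight.

-- A type class as an "interface": anything with an area can be described.
class Shape a where
  area :: a -> Double

newtype Circle = Circle Double   -- radius
data Rect = Rect Double Double   -- width, height

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Rect where
  area (Rect w h) = w * h

-- Works for any Shape, like a method written against an interface.
describe :: Shape a => a -> String
describe s = "area = " ++ show (area s)

main :: IO ()
main = do
  putStrLn (describe (Circle 1.0))
  putStrLn (describe (Rect 2.0 3.0))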
OO languages cannot easily be compiled to other OO languages if they are in some way more "advanced". Usually they have features like stack protectors, advanced debugging abilities, abstract and inspectable multi-threading, or dynamic object loading from files or from the internet. Many of these features are not (or not easily) realizable with C or C++ as a compiler backend. The functional language LISP (which is 50 years old!) was, AFAIK, the first with a garbage collector. As a compiler backend, LISP used a hacked version of C, because plain C did not allow some of the things assembler allowed, e.g. proper tail calls or tables-next-to-code. C-- allows that.
Another aspect: imperative languages are intended to run on a specific architecture, i.e. C and C++ programs run only on the architectures they are compiled for. Java is more extreme: it runs on only a single architecture, a virtual one, which itself runs on others.
Functional languages are usually architecture-independent by design: LISP was developed to be so architecture-unspecific that, in some distant future, it could be compiled to genetic code. Yes, like programs running in living biological cells.
With LLVM bytecode, functional languages will most likely be compiled to bytecode in the future, too. Most imperative languages will most likely still have the same inherited problems they have now from not abstracting far enough. Well, I'm not that sure about Clang and D, but those two are not "most" anyway.

Methodologies for designing a simple programming language

In my ongoing effort to quench my undying thirst for more programming knowledge, I have come up with the idea of writing a (at least for now) simple programming language that compiles into bytecode. The problem is I don't know the first thing about language design. Does anyone have any advice on a methodology for building a parser, and on the basic features every language should have? What reading would you recommend for language design? How high-level should I be shooting for? Is it unrealistic to hope to include a feature that allows inlining bytecode, similar to GCC allowing inline assembler? Seeing as I primarily code in C and Java, which would be better for compiler writing?
There are so many ways...
You could look into stack languages and Forth. It's not very useful when it comes to designing other languages, but it's something that can be done very quickly.
You could look into functional languages. Most of them are based on a few simple concepts, and have simple parsing. And, yet, they are very powerful.
And, then, the traditional languages. They are the hardest. You'll need to learn about lexical analysers, parsers, LALR grammars, LL grammars, EBNF and regular languages just to get past the parsing.
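To give a taste of what that entails, here's a minimal hand-rolled sketch in Haskell - not from any parsing library - of a recursive-descent parser for the EBNF rule expr = term { "+" term }, where a term is a single digit:

-- Grammar (EBNF):  expr = term { "+" term } ;  term = digit ;
-- A parser maps input to Just (result, remaining input), or Nothing on failure.
type Parser a = String -> Maybe (a, String)

digitP :: Parser Int
digitP (c:rest) | c >= '0' && c <= '9' = Just (fromEnum c - fromEnum '0', rest)
digitP _ = Nothing

-- expr = term { "+" term }: parse one term, then fold repeated "+ term" pairs.
exprP :: Parser Int
exprP s = do
  (t, rest) <- digitP s
  go t rest
  where
    go acc ('+':rest) = do
      (t, rest') <- digitP rest
      go (acc + t) rest'
    go acc rest = Just (acc, rest)

main :: IO ()
main = print (exprP "1+2+3")  -- Just (6,"")

Real parsers add tokenizing, precedence, and error reporting, but the grammar-to-function correspondence stays this direct.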
Targeting a bytecode is not just a good idea – doing otherwise is just insane, and mostly useless, in a learning exercise.
Do yourself a favour, and look up books and tutorials about compilers.
Either C or Java will do. Java probably has an advantage, as object orientation is a good match for this type of task. My personal recommendation is Scala. It's a good language to do this type of thing, and it will teach you interesting things about language design along the way.
You might want to read a book on compilers first.
For really understanding what's going on, you'll likely want to write your code in C.
Java wouldn't be a bad choice if you wanted to write an interpreted language, such as Jython. But since it sounds like you want to compile down to machine code, it might be easier in C.
I recommend reading the following books:
ANTLR
Language Design Patterns
These will give you tools and techniques for creating parsers, lexers, and compilers for custom languages.

Are there any decent scripting languages that use functional programming?

I've been reading a bit about functional programming recently and am keen to have a bit of a play. Are there any decent scripting languages that support functional programming? I find that the bulk of my ad-hoc programming is done in Python, so I thought I might be able to do the same with a functional language. Any recommendations?
Lua appears to fit your needs:
Lua (pronounced /ˈluː.ə/ LOO-uh) is a lightweight, reflective, imperative and functional programming language, designed as a scripting language with extensible semantics as a primary goal.
Scala can also be used as a scripting language. It runs on the JVM and supports both imperative OO and functional programming. Using this you can have access to the entire Java class library.
Python can be written in a functional style, as can JavaScript. If you mean something more purely functional, then you could try Haskell.
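If you do try Haskell for ad-hoc scripting, GHC ships with runghc, which runs a source file directly without a separate compile step. A minimal sketch (the file name upcase.hs is hypothetical):

#!/usr/bin/env runghc
-- upcase.hs: echo each command-line argument in upper case.
import Data.Char (toUpper)
import System.Environment (getArgs)

main :: IO ()
main = do
  args <- getArgs
  mapM_ (putStrLn . map toUpper) args

Make it executable with chmod +x upcase.hs, then ./upcase.hs hello world prints HELLO and WORLD.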
GNU's Guile can be used as a stand-alone script interpreter; see this FAQ entry for the details. I'm not sure how much general programming support there is in Guile, though, but it could at least get you started quickly with something that should look and feel like a "traditional" functional language.
Perl can do functional style programming very well. It isn't a pure functional language by any means, but it supports quite a lot of functional idioms. The classic full-length treatment is Mark Jason Dominus's Higher Order Perl, which is now available freely online.
For briefer introductions, take a look at these slides:
Functional programming in Perl
Introduction to functional programming in Perl
It depends on what you mean by "scripting language." It isn't commonly viewed that way, but many Scheme implementations seem to fit the bill as well as Python, and Lisp is sort of the archetypal functional language.
The Julia language. It's also not just a "scripting" language: it's as fast as C.
See my answer here: https://www.quora.com/Whats-a-good-scripting-functional-programming-language/answer/Páll-Haraldsson
I recently worked on a functional scripting language and have already finished the first version. It is a bit like a Haskell/Perl combination and is therefore nice for scripting and for mathematical problems, too.
For example here is a code snippet demonstrating how easy it is:
5 times {echo["Iteration: " concat str[x]]}
If you are interested, you can give it a try: http://ac1235.github.io
Kotlin is a practical functional language, mainly for the JVM. It has a scripting flavor called kscript. I've used it for shell scripting in personal projects. Simple example:
#!/usr/bin/env kscript
args.forEach { arg -> println("arg: $arg") }
Run it:
> ./example.kts hey you
arg: hey
arg: you
Drawbacks:
Requires JVM
Slow startup
For more info:
https://github.com/holgerbrandl/kscript
Search stackoverflow for kscript
If you're comfortable with the JVM and like functional programming, kscript is worth exploring.

Do you think functional language is good for applications that have a lot of business rules but very few computation? [closed]

I am convinced that functional programming is an excellent choice when it comes to applications that require a lot of computation (data mining, AI, NLP, etc.).
Is functional programming being used in any well known enterprise applications or open source projects? How did they incorporate business logic into the functional design?
Please disregard the fact that there are very few people using functional programming and that it's kind of tough.
Thanks
Functional programming languages like Clojure and Scala are good for pretty much anything. As for Haskell, an experienced Haskell programmer would probably be able to substitute Haskell for any language on any problem, efficiently or not. I don't know if there is a functional programming language that could be considered /best/ of all languages for this specific problem, but rest assured it will work, and very well at that.
Also, Clojure and Scala are implemented on the JVM. So technically they /are/ on an enterprise platform.
What are business rules if not functions? Application of rules can be expressed as applying a function to a set of data. It can also be combined with polymorphism, e.g. through generic functions (multiple dispatch can be handy, too) and inheritance.
Code is data, data is code, and both should be like water.
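A minimal Haskell sketch of that idea (all names are hypothetical): each business rule is an ordinary function, and applying the rules to an order is just function application over data.

-- Each rule inspects an order and may veto it with a reason.
data Order = Order { total :: Double, itemCount :: Int }

type Rule = Order -> Maybe String   -- Nothing means the rule passes

rules :: [Rule]
rules =
  [ \o -> if total o <= 0      then Just "total must be positive" else Nothing
  , \o -> if itemCount o > 100 then Just "too many items"         else Nothing
  ]

-- Applying the rule set is just mapping functions over data.
violations :: Order -> [String]
violations o = [ msg | rule <- rules, Just msg <- [rule o] ]

main :: IO ()
main = print (violations (Order { total = -5, itemCount = 3 }))

Adding a rule is adding a function to the list; no class hierarchy needs to change.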
From what I've seen, Scala looks like it handles normal Java just fine. Hence, anything that Java can handle for business, Scala could too.
On the .NET side, F# is another great example of a functional language that works fine for "business" applications. To put it simply, F# can do everything C# can do, and more, easier.
But for both of these languages, the "programming in the large" side tends to borrow from OOP. Not that there's anything wrong with mixing things, but perhaps that's not what you asked. If you want to stick to a more functional approach and, say, not use objects, you could run into a bit more hassle because the tooling support won't be on the same level. With languages that easily integrate with .NET/Java, that's not as big an issue.
As far as "is it wise?": That depends on the project, company, and other environmental factors. It seems that a common "enterprise pattern" is that code has to be extremely dumbed down so that anyone can work on it. In that case, you might get people involved who'd think that using a lambda makes it too difficult for others to understand.
But is it wise to use functional programming for a typical enterprise application where there are a lot of business rules but not much in terms of computation?
Business rules are just computation and you can often express them more succinctly and clearly using functional programming.
A growing number of enterprise apps are written in functional languages. Citrix XenDesktop and XenServer are built upon a tool stack written primarily in OCaml. The MyLife people search engine is written in OCaml. We are a small company, but all of our LOB software (e.g. credit-card transactions, accounts, web analytics) is written in F#. Microsoft's ads on Bing use F# code. Perhaps the most obvious example is anyone using recent versions of C# and .NET, because they are almost certainly using functional concepts (e.g. delegates).
If you mean more exotic functional languages such as Clojure, Scala and Haskell then I believe some people are using them but I do not have any details myself.
More than a year ago I delved a bit into Haskell and tried a few things that I would regard as typical business problems (to put it bluntly: given a number of values, what is the correct response?). Hence I would say, yes, you should be able to model a number of business problems with functional programming.
Personally, I couldn't find in Haskell the same obviousness with which I can push an OO-plus-functional approach in C#, but this could well be because I haven't done much with Haskell and a lot more with C#.
Then there is the question of how to communicate with a customer. My experience is that many of them think in strictly chronological terms, which rather favours imperative programming. Even when going into models of state changes etc., you can lose the odd customer. Thinking in terms of function compositions and monads that represent the chronological operations of the business could probably be beyond many, many customers.
Either way, you can find my business-y example here.
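For what it's worth, here's a minimal Haskell sketch (hypothetical names) of the chronological, monadic style mentioned above: each step returns Either an error message or a result, and >>= chains the steps in order, stopping at the first failure.

-- Each step is Either error-message result; (>>=) sequences them.
checkStock :: Int -> Either String Int
checkStock qty = if qty <= 10 then Right qty else Left "not enough stock"

reserve :: Int -> Either String String
reserve qty = Right ("reserved " ++ show qty ++ " units")

placeOrder :: Int -> Either String String
placeOrder qty = checkStock qty >>= reserve

main :: IO ()
main = do
  print (placeOrder 3)    -- Right "reserved 3 units"
  print (placeOrder 999)  -- Left "not enough stock"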
I assume that when you talk about a lot of business rules, you are thinking about application development, in the sense that you want to model a real-world workflow. Unlike vanilla programming, application development involves higher levels of responsibility (particularly for requirement capturing and testing). If so, I strongly suggest you see whether you could apply domain-driven design. A natural choice for domain-driven design is an object-oriented approach. This, and the fact that a lot of programmers are decent at object-oriented programming, is one reason for its popularity in application development. However, this does not mean that real-world, big-scale projects are always written this way (read http://www.paulgraham.com/avg.html).
You might want to check out the iTasks system, which is a library for the functional language Clean and is designed exactly to express workflow and business processes.

OOP vs Functional Programming vs Procedural [closed]

What are the differences between these programming paradigms, and are they better suited to particular problems or do any use-cases favour one over the others?
Architecture examples appreciated!
All of them are good in their own ways - They're simply different approaches to the same problems.
In a purely procedural style, data tends to be highly decoupled from the functions that operate on it.
In an object oriented style, data tends to carry with it a collection of functions.
In a functional style, data and functions tend toward having more in common with each other (as in Lisp and Scheme) while offering more flexibility in terms of how functions are actually used. Algorithms tend also to be defined in terms of recursion and composition rather than loops and iteration.
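For instance, a minimal Haskell sketch of that recursion-and-composition style: summing the squares of the even numbers in a list, built entirely out of composed functions, with no explicit loop.

-- Composition of small functions replaces an explicit loop.
sumEvenSquares :: [Int] -> Int
sumEvenSquares = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumEvenSquares [1 .. 10])  -- 220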
Of course, the language itself only influences which style is preferred. Even in a pure-functional language like Haskell, you can write in a procedural style (though that is highly discouraged), and even in a procedural language like C, you can program in an object-oriented style (such as in the GTK+ and EFL APIs).
To be clear, the "advantage" of each paradigm is simply in the modeling of your algorithms and data structures. If, for example, your algorithm involves lists and trees, a functional algorithm may be the most sensible. Or, if, for example, your data is highly structured, it may make more sense to compose it as objects if that is the native paradigm of your language - or, it could just as easily be written as a functional abstraction of monads, which is the native paradigm of languages like Haskell or ML.
The choice of which you use is simply what makes more sense for your project and the abstractions your language supports.
I think the available libraries, tools, examples, and communities completely trump the paradigm these days. For example, ML (or whatever) might be the ultimate all-purpose programming language, but if you can't get any good libraries for what you are doing, you're screwed.
For example, if you're making a video game, there are more good code examples and SDKs in C++, so you're probably better off with that. For a small web application, there are some great Python, PHP, and Ruby frameworks that'll get you off and running very quickly. Java is a great choice for larger projects because of the compile-time checking and enterprise libraries and platforms.
It used to be the case that the standard libraries for different languages were pretty small and easily replicated - C, C++, Assembler, ML, LISP, etc. came with the basics but tended to chicken out when it came to standardizing on things like network communications, encryption, graphics, and data file formats (including XML); even basic data structures like balanced trees and hashtables were left out!
Modern languages like Python, PHP, Ruby, and Java now come with a far more decent standard library and have many good third party libraries you can easily use, thanks in great part to their adoption of namespaces to keep libraries from colliding with one another, and garbage collection to standardize the memory management schemes of the libraries.
These paradigms don't have to be mutually exclusive. If you look at Python, it supports functions and classes, but at the same time, everything is an object, including functions. You can mix and match functional/OOP/procedural styles all in one piece of code.
What I mean is that in functional languages (at least in Haskell, the only one I studied) there are no statements! Functions are allowed only one expression inside them! But functions are first-class citizens: you can pass them around as parameters, along with a bunch of other abilities. They can do powerful things with few lines of code.
While in a procedural language like C, the only way you can pass functions around is by using function pointers, and that alone doesn't enable many powerful tasks.
In Python, a function is a first-class citizen, but it can contain an arbitrary number of statements. So you can have a function that contains procedural code, but you can pass it around just like in functional languages.
Same goes for OOP. A language like Java doesn't allow you to write procedures/functions outside of a class. The only way to pass a function around is to wrap it in an object that implements that function, and then pass that object around.
In Python, you don't have this restriction.
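A minimal Haskell sketch of that first-class-function point: twice takes a function as an ordinary argument and applies it two times, something that needs a function pointer in C or a wrapper object in classic Java.

-- 'twice' takes a function as an ordinary argument.
twice :: (a -> a) -> a -> a
twice f = f . f

main :: IO ()
main = do
  print (twice (+ 3) 10)            -- 16
  print (twice (map (* 2)) [1, 2])  -- [4,8]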
For GUIs, I'd say the object-oriented paradigm is very well suited. The window is an object, the textboxes are objects, and the okay button is one too. On the other hand, things like string processing can be done with much less overhead, and therefore more straightforwardly, in a simple procedural paradigm.
I don't think it is a question of the language, either. You can write functional, procedural, or object-oriented code in almost any popular language, although it might take some additional effort in some of them.
In order to answer your question, we need two elements:
Understanding of the characteristics of different architecture styles/patterns.
Understanding of the characteristics of different programming paradigms.
A list of software architecture styles/patterns is shown in the software architecture article on Wikipedia, and you can research them easily on the web.
In short and general, Procedural is good for a model that follows a procedure, OOP is good for design, and Functional is good for high level programming.
I think you should try reading the history of each paradigm and seeing why people created it; that way you can understand them easily.
After understanding them both, you can link the items of architecture styles/patterns to programming paradigms.
I think that they are often not "versus", but you can combine them. I also think that oftentimes, the words you mention are just buzzwords. There are few people who actually know what "object-oriented" means, even if they are the fiercest evangelists of it.
One of my friends is writing a graphics app using NVIDIA CUDA. The application fits very nicely into the OOP paradigm, and the problem can be decomposed into modules neatly. However, to use CUDA you need to use C, which doesn't support inheritance. Therefore, you need to be clever.
a) You devise a clever system which will emulate inheritance to a certain extent. It can be done!
i) You can use a hook system, which expects every child C of parent P to have a certain override for function F. You can make children register their overrides, which will be stored and called when required.
ii) You can use the struct memory-layout feature to cast children to parents.
This can be neat, but it's not easy to come up with a future-proof, reliable solution. You will spend lots of time designing the system, and there is no guarantee that you won't run into problems halfway through the project. Implementing multiple inheritance is even harder, if not almost impossible.
b) You can use a consistent naming policy and a divide-and-conquer approach to create the program. It won't have any inheritance, but because your functions are small, easy to understand, and consistently formatted, you don't need it. The amount of code you need to write goes up; it's very hard to stay focused and not succumb to easy solutions (hacks). However, this ninja way of coding is the C way of coding: staying in balance between low-level freedom and writing good code. A good way to achieve this is to write prototypes using a functional language. For example, Haskell is extremely good for prototyping algorithms.
I tend towards approach b. I wrote a possible solution using approach a, and I will be honest, it felt very unnatural using that code.
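To illustrate the Haskell-for-prototyping point from option b: sketching an algorithm such as quicksort takes only a few lines, which makes it cheap to validate the logic before committing to a C/CUDA implementation. (A clarity-first sketch, not an efficient sort.)

-- A three-line prototype of the algorithm's logic; performance tuning
-- can come later, in the production language.
qsort :: Ord a => [a] -> [a]
qsort []       = []
qsort (p : xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = print (qsort [5, 1, 4, 2, 3])  -- [1,2,3,4,5]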