Discussion:
Serial-functional-transformational programming: concatenative?
n***@gmail.com
2011-02-07 02:40:52 UTC
This is a brainstorm and may well be completely absurd.

It is prompted by the nagging reminder that I aborted an attempt to
program, in Oberon [a refinement of Pascal], a utility to,
in the text-stretch [which could perhaps be <the stack>]:
* remove any ">"
* remove all eol, to make one long line
* wrap the lines at word-boundaries to make lines of length < N
* IF the start of the text-stretch was: ">" <N spaces>
THEN let all the lines start with ">" <N spaces>.
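A minimal sketch of those 4 steps as a pipe of stock utilities [assuming a
fixed quote-prefix "> ", taking N = 72, and doing the re-quoting of the last
step unconditionally; draft.txt is just a stand-in for wherever the
text-stretch lives]:

  sed 's/^> *//' draft.txt |  # remove any ">" at line-starts
  tr '\n' ' ' |               # remove all eol, to make one long line
  fold -s -w 70 |             # wrap at word-boundaries; 70 + prefix < 72
  sed 's/^/> /'               # let all the lines start with "> "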

The conventional approach of thinking in terms of a Char-array and pointers
is annoyingly frustrating; and of course the 4 steps above were not the
Algol-like algorithm, but rather something like the cascaded/concatenated
off-the-shelf linux utilities, which eventually did it -- easily.

That the unix-boys can regularly use these dazzling one-liners with great
effect is only possible because the appropriate library of text-transformers
IS already available.

But how was the library designed/selected?
Was this done empirically, after long experience, or is there a theoretical
basis?

I imagine [without the inhibition of giving it much thought] how slick it
would be if:
Joy-like, the text-stretch/some-lines could be passed back & forth, like
a stack;
and the appropriate transforming functions could do their job.
[Sorry for the pun: forth].

Eg.
* push the marked/selected text-stretch to the stack;
* do Transformer1 -- on the stack;
* do Transformer2 -- on the stack;
* ...TransformerN ....
* replace the marked/selected text-stretch by the stack.
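In shell terms the 'stack' is just the text flowing through the pipe; a rough
sketch [selection.txt, revised.txt and the transformer names are stand-ins,
not real tools]:

  cat selection.txt |  # push the marked/selected text-stretch
  transformer1 |       # do Transformer1 -- on the stack
  transformer2 |       # do Transformer2 -- on the stack
  cat > revised.txt    # replace the text-stretch by the stack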

Again, the labour-efficient aspect of being able to test each 'stage'
independently, while evolving the algorithm/transforming-sequence, makes for
good productivity.

I've read that 'Factor' and other modern languages have evolved vast libraries
of utilities, which seems to just change the original problem into that of
managing libraries.

OTOH, let me now browse some linux-scripts to confirm that they typically
use only a few text-transformers:

PS. The algorithm to browse for linux-scripts also uses a cat-like approach:
* think of that little linux partition which has such scripts;
* list the partitions to be able to select its name;
* think-of/remember some typical *nix-transformers which could be part of
typical one-liners; eg. sed, grep, tr, cut...;
* use a tool to find/display/paste-here some one-liners, for analysis.

My on-hand one-liner, which does:
* search in the dir-tree,
* for all files which contain *nix-transformer1,
* and *nix-transformer2;
will likely find some suitably dazzling one-liners.
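E.g. [a sketch; /mnt/scripts is a stand-in for that little partition's mount
point], with sed and cut as the two transformers:

  grep -rl 'sed' /mnt/scripts | xargs grep -l 'cut'

where grep -rl lists the files under the tree which mention sed, and the
second grep -l keeps only those which also mention cut.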

As a non-expert *nix user, I'm using this found one-liner as an example:
nrusers="`who|cut -c 1-8|sort -u|wc -l|sed 's/^[ ]*//'`"
of how the data is successively transformed, and easily testable at each
of the 6 transformation stages.

`who` lists the info about the 'users'.
`who |wc -l` = 26, for me now; meaning there are 26 lines of info.
`cut -c 1-8` extracts the users' names only: chars 1 to 8.
`sort -u` apparently orders the names and removes duplicate lines;
`wc -l` counts and prints the number of lines.
`sed 's/^[ ]*//'` deletes zero-or-more leading spaces.
<enclosing back-quotes> means <execute the string/commands inside of 'them'>.
`=` means 'assign the RHS to the variable on the LHS'; [I wonder why
pop11's syntax of "2 -> <variableName>" never caught on.]
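To see the testability concretely: each prefix of the pipe is itself a
runnable command, so every stage can be inspected in isolation:

  who                                     # the raw user info
  who | cut -c 1-8                        # just the names
  who | cut -c 1-8 | sort -u              # ordered, duplicates removed
  who | cut -c 1-8 | sort -u | wc -l      # the count, with leading blanks
  who | cut -c 1-8 | sort -u | wc -l | sed 's/^[ ]*//'   # blanks stripped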

So apparently the one-liner reads the <user info> and, in my case, extracts
that the NumberOfUsers = 2, which is assigned to the variable 'nrusers'.

Unfortunately, IMO, the baroque syntax of *nix conceals the clean, underlying
A -> B -> C -> D -> E -> F : 6-stage data transformation
structure. But I hope I've explained what I mean.

IMO the above method, which the *nix-boys have been using for decades, is
very efficient in terms of human effort; with a more regular syntax, and
perhaps a theoretical foundation, it could be used as the basis of a
VM/language for text manipulation which is similarly efficient.

Conclusion.
There are several on-line articles explaining various aspects of these
so-called concatenative languages, but I've found none which try to
explain WHY they may be more productive than other approaches.

Of course that gets into fuzzy stuff like psychology/cog-science, but Backus'
award-winning publication of ca. '78 [his Turing Award lecture, "Can
Programming Be Liberated from the von Neumann Style?"] gave/implied some
such good reasons.

== Chris Glur.
Bill James
2011-02-07 11:48:11 UTC
>      nrusers="`who|cut -c 1-8|sort -u|wc -l|sed 's/^[ ]*//'`"
> of how the data is successively transformed, and easily testable at each
> of the 6 transformation stages.
> [...]
>   A -> B -> C -> D -> E -> F : 6 stage data transformation
> structure. But I hope I've explained what I mean.
In Clojure, you can do

user=> (->> '(2 2 5 4 5 3 9 7) sort distinct (filter odd?))
(3 5 7 9)

The "->>" operator passes the data item as the last argument
to each succeeding function. There is also "->", which passes
the data as the first argument.

user=> (-> '(2 5 4 5 3) sort distinct (concat '(-x-)))
(2 3 4 5 -x-)
user=> (->> '(2 5 4 5 3) sort distinct (concat '(-x-)))
(-x- 2 3 4 5)
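Roughly the same chain in the thread's pipe notation [a sketch; awk's
'$1 % 2' pattern keeps the odd numbers]:

  $ echo '2 2 5 4 5 3 9 7' | tr ' ' '\n' | sort -n | uniq | awk '$1 % 2'
  3
  5
  7
  9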
