From 81ebe649f0134a66137d3266fcd2e1b93d9e0ba3 Mon Sep 17 00:00:00 2001
From: cvs2git convertor
Date: Thu, 11 Oct 2007 21:16:29 +0000
Subject: [PATCH 01/23] This commit was manufactured by cvs2svn to create tag
'luasocket-2-0-2'.
Sprout from master 2007-10-11 21:16:28 UTC Diego Nehab 'Tested each sample.'
Cherrypick from master 2007-05-31 22:27:40 UTC Diego Nehab 'Before sending to Roberto.':
gem/ltn012.tex
gem/makefile
---
gem/ltn012.tex | 351 +++++++++++++++++++++++--------------------------
gem/makefile | 9 --
2 files changed, 163 insertions(+), 197 deletions(-)
diff --git a/gem/ltn012.tex b/gem/ltn012.tex
index 8eccd46..0f81b86 100644
--- a/gem/ltn012.tex
+++ b/gem/ltn012.tex
@@ -6,10 +6,7 @@
\DefineVerbatimEnvironment{mime}{Verbatim}{fontsize=\small,commandchars=\$\#\%}
\newcommand{\stick}[1]{\vbox{\setlength{\parskip}{0pt}#1}}
\newcommand{\bl}{\ensuremath{\mathtt{\backslash}}}
-\newcommand{\CR}{\texttt{CR}}
-\newcommand{\LF}{\texttt{LF}}
-\newcommand{\CRLF}{\texttt{CR~LF}}
-\newcommand{\nil}{\texttt{nil}}
+
\title{Filters, sources, sinks, and pumps\\
{\large or Functional programming for the rest of us}}
@@ -20,31 +17,30 @@
\maketitle
\begin{abstract}
-Certain data processing operations can be implemented in the
-form of filters. A filter is a function that can process
-data received in consecutive invocations, returning partial
-results each time it is called. Examples of operations that
-can be implemented as filters include the end-of-line
-normalization for text, Base64 and Quoted-Printable transfer
-content encodings, the breaking of text into lines, SMTP
-dot-stuffing, and there are many others. Filters become
-even more powerful when we allow them to be chained together
-to create composite filters. In this context, filters can be
-seen as the internal links in a chain of data transformations.
-Sources and sinks are the corresponding end points in these
-chains. A source is a function that produces data, chunk by
-chunk, and a sink is a function that takes data, chunk by
-chunk. Finally, pumps are procedures that actively drive
-data from a source to a sink, and indirectly through all
-intervening filters. In this article, we describe the design of an
-elegant interface for filters, sources, sinks, chains, and
-pumps, and we illustrate each step with concrete examples.
+Certain data processing operations can be implemented in the
+form of filters. A filter is a function that can process data
+received in consecutive function calls, returning partial
+results after each invocation. Examples of operations that can be
+implemented as filters include the end-of-line normalization
+for text, Base64 and Quoted-Printable transfer content
+encodings, the breaking of text into lines, SMTP dot-stuffing,
+and there are many others. Filters become even
+more powerful when we allow them to be chained together to
+create composite filters. In this context, filters can be seen
+as the middle links in a chain of data transformations. Sources and sinks
+are the corresponding end points of these chains. A source
+is a function that produces data, chunk by chunk, and a sink
+is a function that takes data, chunk by chunk. In this
+article, we describe the design of an elegant interface for filters,
+sources, sinks, and chaining, and illustrate each step
+with concrete examples.
\end{abstract}
+
\section{Introduction}
Within the realm of networking applications, we are often
-required to apply transformations to streams of data. Examples
+required to apply transformations to streams of data. Examples
include the end-of-line normalization for text, Base64 and
Quoted-Printable transfer content encodings, breaking text
into lines with a maximum number of columns, SMTP
@@ -54,10 +50,11 @@ transfer coding, and the list goes on.
Many complex tasks require a combination of two or more such
transformations, and therefore a general mechanism for
promoting reuse is desirable. In the process of designing
-\texttt{LuaSocket~2.0}, we repeatedly faced this problem.
-The solution we reached proved to be very general and
-convenient. It is based on the concepts of filters, sources,
-sinks, and pumps, which we introduce below.
+\texttt{LuaSocket~2.0}, David Burgess and I were forced to deal with
+this problem. The solution we reached proved to be very
+general and convenient. It is based on the concepts of
+filters, sources, sinks, and pumps, which we introduce
+below.
\emph{Filters} are functions that can be repeatedly invoked
with chunks of input, successively returning processed
@@ -65,33 +62,34 @@ chunks of output. More importantly, the result of
concatenating all the output chunks must be the same as the
result of applying the filter to the concatenation of all
input chunks. In fancier language, filters \emph{commute}
-with the concatenation operator. More importantly, filters
-must handle input data correctly no matter how the stream
-has been split into chunks.
+with the concatenation operator. As a result, chunk
+boundaries are irrelevant: filters correctly handle input
+data no matter how it is split.
-A \emph{chain} is a function that transparently combines the
-effect of one or more filters. The interface of a chain is
-indistinguishable from the interface of its component
-filters. This allows a chained filter to be used wherever
-an atomic filter is accepted. In particular, chains can be
+A \emph{chain} transparently combines the effect of one or
+more filters. The interface of a chain is
+indistinguishable from the interface of its components.
+This allows a chained filter to be used wherever an atomic
+filter is expected. In particular, chains can be
themselves chained to create arbitrarily complex operations.
Filters can be seen as internal nodes in a network through
which data will flow, potentially being transformed many
-times along the way. Chains connect these nodes together.
-The initial and final nodes of the network are
-\emph{sources} and \emph{sinks}, respectively. Less
-abstractly, a source is a function that produces new data
-every time it is invoked. Conversely, sinks are functions
-that give a final destination to the data they receive.
-Naturally, sources and sinks can also be chained with
-filters to produce filtered sources and sinks.
+times along its way. Chains connect these nodes together.
+To complete the picture, we need \emph{sources} and
+\emph{sinks}. These are the initial and final nodes of the
+network, respectively. Less abstractly, a source is a
+function that produces new data every time it is called.
+Conversely, sinks are functions that give a final
+destination to the data they receive. Naturally, sources
+and sinks can also be chained with filters to produce
+filtered sources and sinks.
Finally, filters, chains, sources, and sinks are all passive
entities: they must be repeatedly invoked in order for
anything to happen. \emph{Pumps} provide the driving force
that pushes data through the network, from a source to a
-sink, and indirectly through all intervening filters.
+sink.
In the following sections, we start with a simplified
interface, which we later refine. The evolution we present
@@ -101,28 +99,27 @@ concepts within our application domain.
\subsection{A simple example}
-The end-of-line normalization of text is a good
+Let us use the end-of-line normalization of text as an
example to motivate our initial filter interface.
Assume we are given text in an unknown end-of-line
convention (including possibly mixed conventions) out of the
-commonly found Unix (\LF), Mac OS (\CR), and
-DOS (\CRLF) conventions. We would like to be able to
-use the folowing code to normalize the end-of-line markers:
+commonly found Unix (LF), Mac OS (CR), and DOS (CRLF)
+conventions. We would like to be able to write code like the
+following:
\begin{quote}
\begin{lua}
@stick#
-local CRLF = "\013\010"
-local input = source.chain(source.file(io.stdin), normalize(CRLF))
-local output = sink.file(io.stdout)
-pump.all(input, output)
+local inp = source.chain(source.file(io.stdin), normalize("\r\n"))
+local out = sink.file(io.stdout)
+pump.all(inp, out)
%
\end{lua}
\end{quote}
This program should read data from the standard input stream
-and normalize the end-of-line markers to the canonic
-\CRLF\ marker, as defined by the MIME standard.
-Finally, the normalized text should be sent to the standard output
+and normalize the end-of-line markers to the canonic CRLF
+marker, as defined by the MIME standard. Finally, the
+normalized text should be sent to the standard output
stream. We use a \emph{file source} that produces data from
standard input, and chain it with a filter that normalizes
the data. The pump then repeatedly obtains data from the
@@ -130,28 +127,27 @@ source, and passes it to the \emph{file sink}, which sends
it to the standard output.
In the code above, the \texttt{normalize} \emph{factory} is a
-function that creates our normalization filter, which
-replaces any end-of-line marker with the canonic marker.
-The initial filter interface is
+function that creates our normalization filter. This filter
+will replace any end-of-line marker with the canonic
+`\verb|\r\n|' marker. The initial filter interface is
trivial: a filter function receives a chunk of input data,
and returns a chunk of processed data. When there are no
more input data left, the caller notifies the filter by invoking
-it with a \nil\ chunk. The filter responds by returning
-the final chunk of processed data (which could of course be
-the empty string).
+it with a \texttt{nil} chunk. The filter responds by returning
+the final chunk of processed data.
Although the interface is extremely simple, the
implementation is not so obvious. A normalization filter
respecting this interface needs to keep some kind of context
between calls. This is because a chunk boundary may lie between
-the \CR\ and \LF\ characters marking the end of a single line. This
+the CR and LF characters marking the end of a line. This
need for contextual storage motivates the use of
factories: each time the factory is invoked, it returns a
filter with its own context so that we can have several
independent filters being used at the same time. For
efficiency reasons, we must avoid the obvious solution of
concatenating all the input into the context before
-producing any output chunks.
+producing any output.
To that end, we break the implementation into two parts:
a low-level filter, and a factory of high-level filters. The
@@ -171,10 +167,10 @@ end-of-line normalization filters:
\begin{quote}
\begin{lua}
@stick#
-function filter.cycle(lowlevel, context, extra)
+function filter.cycle(low, ctx, extra)
return function(chunk)
local ret
- ret, context = lowlevel(context, chunk, extra)
+ ret, ctx = low(ctx, chunk, extra)
return ret
end
end
@@ -182,30 +178,27 @@ end
@stick#
function normalize(marker)
- return filter.cycle(eol, 0, marker)
+    return filter.cycle(eol, 0, marker)
end
%
\end{lua}
\end{quote}
The \texttt{normalize} factory simply calls a more generic
-factory, the \texttt{cycle}~factory, passing the low-level
-filter~\texttt{eol}. The \texttt{cycle}~factory receives a
+factory, the \texttt{cycle} factory. This factory receives a
low-level filter, an initial context, and an extra
parameter, and returns a new high-level filter. Each time
the high-level filter is passed a new chunk, it invokes the
low-level filter with the previous context, the new chunk,
and the extra argument. It is the low-level filter that
does all the work, producing the chunk of processed data and
-a new context. The high-level filter then replaces its
+a new context. The high-level filter then updates its
internal context, and returns the processed chunk of data to
the user. Notice that we take advantage of Lua's lexical
scoping to store the context in a closure between function
calls.
-\subsection{The C part of the filter}
-
-As for the low-level filter, we must first accept
+Concerning the low-level filter code, we must first accept
that there is no perfect solution to the end-of-line marker
normalization problem. The difficulty comes from an
inherent ambiguity in the definition of empty lines within
@@ -215,39 +208,39 @@ mixed input. It also does a reasonable job with empty lines
and serves as a good example of how to implement a low-level
filter.
-The idea is to consider both \CR\ and~\LF\ as end-of-line
+The idea is to consider both CR and~LF as end-of-line
\emph{candidates}. We issue a single break if any candidate
-is seen alone, or if it is followed by a different
-candidate. In other words, \CR~\CR~and \LF~\LF\ each issue
-two end-of-line markers, whereas \CR~\LF~and \LF~\CR\ issue
-only one marker each. It is easy to see that this method
-correctly handles the most common end-of-line conventions.
+is seen alone, or followed by a different candidate. In
+other words, CR~CR~and LF~LF each issue two end-of-line
+markers, whereas CR~LF~and LF~CR issue only one marker each.
+This method correctly handles the Unix, DOS/MIME, VMS, and Mac
+OS conventions.
-With this in mind, we divide the low-level filter into two
-simple functions. The inner function~\texttt{pushchar} performs the
-normalization itself. It takes each input character in turn,
-deciding what to output and how to modify the context. The
-context tells if the last processed character was an
-end-of-line candidate, and if so, which candidate it was.
-For efficiency, we use Lua's auxiliary library's buffer
-interface:
+\subsection{The C part of the filter}
+
+Our low-level filter is divided into two simple functions.
+The inner function performs the normalization itself. It takes
+each input character in turn, deciding what to output and
+how to modify the context. The context tells if the last
+processed character was an end-of-line candidate, and if so,
+which candidate it was. For efficiency, it uses
+Lua's auxiliary library's buffer interface:
\begin{quote}
\begin{C}
@stick#
@#define candidate(c) (c == CR || c == LF)
-static int pushchar(int c, int last, const char *marker,
+static int process(int c, int last, const char *marker,
luaL_Buffer *buffer) {
if (candidate(c)) {
if (candidate(last)) {
- if (c == last)
- luaL_addstring(buffer, marker);
+ if (c == last) luaL_addstring(buffer, marker);
return 0;
} else {
luaL_addstring(buffer, marker);
return c;
}
} else {
- luaL_pushchar(buffer, c);
+ luaL_putchar(buffer, c);
return 0;
}
}
@@ -255,20 +248,15 @@ static int pushchar(int c, int last, const char *marker,
\end{C}
\end{quote}
-The outer function~\texttt{eol} simply interfaces with Lua.
-It receives the context and input chunk (as well as an
-optional custom end-of-line marker), and returns the
-transformed output chunk and the new context.
-Notice that if the input chunk is \nil, the operation
-is considered to be finished. In that case, the loop will
-not execute a single time and the context is reset to the
-initial state. This allows the filter to be reused many
-times:
+The outer function simply interfaces with Lua. It receives the
+context and input chunk (as well as an optional
+custom end-of-line marker), and returns the transformed
+output chunk and the new context:
\begin{quote}
\begin{C}
@stick#
static int eol(lua_State *L) {
- int context = luaL_checkint(L, 1);
+ int ctx = luaL_checkint(L, 1);
size_t isize = 0;
const char *input = luaL_optlstring(L, 2, NULL, &isize);
const char *last = input + isize;
@@ -281,18 +269,24 @@ static int eol(lua_State *L) {
return 2;
}
while (input < last)
- context = pushchar(*input++, context, marker, &buffer);
+ ctx = process(*input++, ctx, marker, &buffer);
luaL_pushresult(&buffer);
- lua_pushnumber(L, context);
+ lua_pushnumber(L, ctx);
return 2;
}
%
\end{C}
\end{quote}
+Notice that if the input chunk is \texttt{nil}, the operation
+is considered to be finished. In that case, the loop will
+not execute a single time and the context is reset to the
+initial state. This allows the filter to be reused many
+times.
+
When designing your own filters, the challenging part is to
decide what will be in the context. For line breaking, for
-instance, it could be the number of bytes that still fit in the
+instance, it could be the number of bytes left in the
current line. For Base64 encoding, it could be a string
with the bytes that remain after the division of the input
into 3-byte atoms. The MIME module in the \texttt{LuaSocket}
@@ -300,22 +294,19 @@ distribution has many other examples.
\section{Filter chains}
-Chains greatly increase the power of filters. For example,
+Chains add a lot to the power of filters. For example,
according to the standard for Quoted-Printable encoding,
-text should be normalized to a canonic end-of-line marker
-prior to encoding. After encoding, the resulting text must
-be broken into lines of no more than 76 characters, with the
-use of soft line breaks (a line terminated by the \texttt{=}
-sign). To help specifying complex transformations like
-this, we define a chain factory that creates a composite
-filter from one or more filters. A chained filter passes
-data through all its components, and can be used wherever a
-primitive filter is accepted.
+text must be normalized to a canonic end-of-line marker
+prior to encoding. To help specify complex
+transformations like this, we define a chain factory that
+creates a composite filter from one or more filters. A
+chained filter passes data through all its components, and
+can be used wherever a primitive filter is accepted.
The chaining factory is very simple. The auxiliary
function~\texttt{chainpair} chains two filters together,
taking special care if the chunk is the last. This is
-because the final \nil\ chunk notification has to be
+because the final \texttt{nil} chunk notification has to be
pushed through both filters in turn:
\begin{quote}
\begin{lua}
@@ -331,9 +322,9 @@ end
@stick#
function filter.chain(...)
- local f = select(1, ...)
- for i = 2, select('@#', ...) do
- f = chainpair(f, select(i, ...))
+ local f = arg[1]
+ for i = 2, @#arg do
+ f = chainpair(f, arg[i])
end
return f
end
@@ -346,11 +337,11 @@ define the Quoted-Printable conversion as such:
\begin{quote}
\begin{lua}
@stick#
-local qp = filter.chain(normalize(CRLF), encode("quoted-printable"),
- wrap("quoted-printable"))
-local input = source.chain(source.file(io.stdin), qp)
-local output = sink.file(io.stdout)
-pump.all(input, output)
+local qp = filter.chain(normalize("\r\n"),
+ encode("quoted-printable"))
+local inp = source.chain(source.file(io.stdin), qp)
+local out = sink.file(io.stdout)
+pump.all(inp, out)
%
\end{lua}
\end{quote}
@@ -369,14 +360,14 @@ gives a final destination to the data.
\subsection{Sources}
A source returns the next chunk of data each time it is
-invoked. When there is no more data, it simply returns~\nil.
-In the event of an error, the source can inform the
-caller by returning \nil\ followed by the error message.
+invoked. When there is no more data, it simply returns
+\texttt{nil}. In the event of an error, the source can inform the
+caller by returning \texttt{nil} followed by an error message.
Below are two simple source factories. The \texttt{empty} source
returns no data, possibly returning an associated error
-message. The \texttt{file} source yields the contents of a file
-in a chunk by chunk fashion:
+message. The \texttt{file} source works harder, and
+yields the contents of a file in a chunk by chunk fashion:
\begin{quote}
\begin{lua}
@stick#
@@ -407,7 +398,7 @@ A filtered source passes its data through the
associated filter before returning it to the caller.
Filtered sources are useful when working with
functions that get their input data from a source (such as
-the pumps in our examples). By chaining a source with one or
+the pump in our first example). By chaining a source with one or
more filters, the function can be transparently provided
with filtered data, with no need to change its interface.
Here is a factory that does the job:
@@ -415,18 +406,14 @@ Here is a factory that does the job:
\begin{lua}
@stick#
function source.chain(src, f)
- return function()
- if not src then
- return nil
- end
+ return source.simplify(function()
+ if not src then return nil end
local chunk, err = src()
if not chunk then
src = nil
return f(nil)
- else
- return f(chunk)
- end
- end
+ else return f(chunk) end
+ end)
end
%
\end{lua}
@@ -434,20 +421,20 @@ end
\subsection{Sinks}
-Just as we defined an interface for source of data,
+Just as we defined an interface for a data source,
we can also define an interface for a data destination.
We call any function respecting this
interface a \emph{sink}. In our first example, we used a
file sink connected to the standard output.
Sinks receive consecutive chunks of data, until the end of
-data is signaled by a \nil\ input chunk. A sink can be
+data is signaled by a \texttt{nil} chunk. A sink can be
notified of an error with an optional extra argument that
-contains the error message, following a \nil\ chunk.
+contains the error message, following a \texttt{nil} chunk.
If a sink detects an error itself, and
-wishes not to be called again, it can return \nil,
+wishes not to be called again, it can return \texttt{nil},
followed by an error message. A return value that
-is not \nil\ means the sink will accept more data.
+is not \texttt{nil} means the sink will accept more data.
Below are two useful sink factories.
The table factory creates a sink that stores
@@ -482,7 +469,7 @@ end
Naturally, filtered sinks are just as useful as filtered
sources. A filtered sink passes each chunk it receives
-through the associated filter before handing it down to the
+through the associated filter before handing it to the
original sink. In the following example, we use a source
that reads from the standard input. The input chunks are
sent to a table sink, which has been coupled with a
@@ -492,10 +479,10 @@ standard out:
\begin{quote}
\begin{lua}
@stick#
-local input = source.file(io.stdin)
-local output, t = sink.table()
-output = sink.chain(normalize(CRLF), output)
-pump.all(input, output)
+local inp = source.file(io.stdin)
+local out, t = sink.table()
+out = sink.chain(normalize("\r\n"), out)
+pump.all(inp, out)
io.write(table.concat(t))
%
\end{lua}
@@ -503,11 +490,11 @@ io.write(table.concat(t))
\subsection{Pumps}
-Although not on purpose, our interface for sources is
-compatible with Lua iterators. That is, a source can be
-neatly used in conjunction with \texttt{for} loops. Using
-our file source as an iterator, we can write the following
-code:
+Adrian Sietsma noticed that, although not on purpose, our
+interface for sources is compatible with Lua iterators.
+That is, a source can be neatly used in conjunction
+with \texttt{for} loops. Using our file
+source as an iterator, we can write the following code:
\begin{quote}
\begin{lua}
@stick#
@@ -552,22 +539,20 @@ end
The \texttt{pump.step} function moves one chunk of data from
the source to the sink. The \texttt{pump.all} function takes
an optional \texttt{step} function and uses it to pump all the
-data from the source to the sink.
-Here is an example that uses the Base64 and the
-line wrapping filters from the \texttt{LuaSocket}
-distribution. The program reads a binary file from
+data from the source to the sink. We can now use everything
+we have to write a program that reads a binary file from
disk and stores it in another file, after encoding it to the
Base64 transfer content encoding:
\begin{quote}
\begin{lua}
@stick#
-local input = source.chain(
+local inp = source.chain(
source.file(io.open("input.bin", "rb")),
encode("base64"))
-local output = sink.chain(
+local out = sink.chain(
wrap(76),
sink.file(io.open("output.b64", "w")))
-pump.all(input, output)
+pump.all(inp, out)
%
\end{lua}
\end{quote}
@@ -576,17 +561,19 @@ The way we split the filters here is not intuitive, on
purpose. Alternatively, we could have chained the Base64
encode filter and the line-wrap filter together, and then
chain the resulting filter with either the file source or
-the file sink. It doesn't really matter.
+the file sink. It doesn't really matter. The Base64 and the
+line wrapping filters are part of the \texttt{LuaSocket}
+distribution.
\section{Exploding filters}
-Our current filter interface has one serious shortcoming.
-Consider for example a \texttt{gzip} decompression filter.
-During decompression, a small input chunk can be exploded
-into a huge amount of data. To address this problem, we
-decided to change the filter interface and allow exploding
-filters to return large quantities of output data in a chunk
-by chunk manner.
+Our current filter interface has one flagrant shortcoming.
+When David Burgess was writing his \texttt{gzip} filter, he
+noticed that a decompression filter can explode a small
+input chunk into a huge amount of data. To address this
+problem, we decided to change the filter interface and allow
+exploding filters to return large quantities of output data
+in a chunk by chunk manner.
More specifically, after passing each chunk of input to
a filter, and collecting the first chunk of output, the
@@ -595,11 +582,11 @@ filtered data is left. Within these secondary calls, the
caller passes an empty string to the filter. The filter
responds with an empty string when it is ready for the next
input chunk. In the end, after the user passes a
-\nil\ chunk notifying the filter that there is no
+\texttt{nil} chunk notifying the filter that there is no
more input data, the filter might still have to produce too
much output data to return in a single chunk. The user has
-to loop again, now passing \nil\ to the filter each time,
-until the filter itself returns \nil\ to notify the
+to loop again, now passing \texttt{nil} to the filter each time,
+until the filter itself returns \texttt{nil} to notify the
user it is finally done.
Fortunately, it is very easy to modify a filter to respect
@@ -612,13 +599,13 @@ Interestingly, the modifications do not have a measurable
negative impact in the performance of filters that do
not need the added flexibility. On the other hand, for a
small price in complexity, the changes make exploding
-filters practical.
+filters practical.
\section{A complex example}
The LTN12 module in the \texttt{LuaSocket} distribution
-implements all the ideas we have described. The MIME
-and SMTP modules are tightly integrated with LTN12,
+implements the ideas we have described. The MIME
+and SMTP modules are especially integrated with LTN12,
and can be used to showcase the expressive power of filters,
sources, sinks, and pumps. Below is an example
of how a user would proceed to define and send a
@@ -635,9 +622,9 @@ local message = smtp.message{
to = "Fulano ",
subject = "A message with an attachment"},
body = {
- preamble = "Hope you can see the attachment" .. CRLF,
+ preamble = "Hope you can see the attachment\r\n",
[1] = {
- body = "Here is our logo" .. CRLF},
+ body = "Here is our logo\r\n"},
[2] = {
headers = {
["content-type"] = 'image/png; name="luasocket.png"',
@@ -678,18 +665,6 @@ abstraction for final data destinations. Filters define an
interface for data transformations. The chaining of
filters, sources and sinks provides an elegant way to create
arbitrarily complex data transformations from simpler
-components. Pumps simply push the data through.
-
-\section{Acknowledgements}
-
-The concepts described in this text are the result of long
-discussions with David Burgess. A version of this text has
-been released on-line as the Lua Technical Note 012, hence
-the name of the corresponding LuaSocket module,
-\texttt{ltn12}. Wim Couwenberg contributed to the
-implementation of the module, and Adrian Sietsma was the
-first to notice the correspondence between sources and Lua
-iterators.
-
+components. Pumps simply move the data through.
\end{document}
diff --git a/gem/makefile b/gem/makefile
index d2f0c93..a4287c2 100644
--- a/gem/makefile
+++ b/gem/makefile
@@ -12,12 +12,3 @@ clean:
pdf: ltn012.pdf
open ltn012.pdf
-
-test: gem.so
-
-
-gem.o: gem.c
- gcc -c -o gem.o -Wall -ansi -W -O2 gem.c
-
-gem.so: gem.o
- export MACOSX_DEPLOYMENT_TARGET="10.3"; gcc -bundle -undefined dynamic_lookup -o gem.so gem.o
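
For readers who want to try out the filter contract described in the article
above without building its C module, here is a minimal pure-Lua sketch of the
same end-of-line algorithm. It reuses the names eol and cycle from the text,
but it is an illustration of the interface only, not the LuaSocket
implementation itself:

  local CR, LF = "\r", "\n"

  -- low-level filter: takes a context, a chunk, and the marker to emit;
  -- returns the processed chunk and the new context (0 = no pending candidate)
  local function eol(ctx, chunk, marker)
    if not chunk then return nil, 0 end        -- nil chunk: finished, reset
    local out = {}
    for c in chunk:gmatch(".") do
      if c == CR or c == LF then               -- end-of-line candidate
        if ctx == CR or ctx == LF then         -- previous char was a candidate
          if c == ctx then out[#out+1] = marker end
          ctx = 0
        else
          out[#out+1] = marker
          ctx = c
        end
      else
        out[#out+1] = c
        ctx = 0
      end
    end
    return table.concat(out), ctx
  end

  -- high-level factory, as in the article: the context lives in a closure
  local function cycle(low, ctx, extra)
    return function(chunk)
      local ret
      ret, ctx = low(ctx, chunk, extra)
      return ret
    end
  end

  -- chunk boundaries do not matter: a CR LF pair split across two chunks
  -- still produces a single break
  local f = cycle(eol, 0, "\r\n")
  local a = f("one\r")
  local b = f("\ntwo\n")
  local c = f(nil) or ""
  io.write(a, b, c)                            --> one<CRLF>two<CRLF>
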
From e3e0dee639309b0496b3955051269fddbd397675 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Fri, 17 Jun 2011 13:45:52 -0700
Subject: [PATCH 02/23] ignore build output
---
.gitignore | 3 +++
1 file changed, 3 insertions(+)
create mode 100644 .gitignore
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..705ce5b
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,3 @@
+*.o
+*.so
+*.so.*
From dcb92d62681be91686b4b5211dbd419b95d81e06 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Fri, 17 Jun 2011 14:36:20 -0700
Subject: [PATCH 03/23] Support the conventional DESTDIR and prefix variables

Many packaging systems rely on them; they are described here:
- http://www.gnu.org/prep/standards/standards.html#index-prefix
- http://www.gnu.org/prep/standards/standards.html#DESTDIR
---
config | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/config b/config
index 49958eb..62e52ef 100644
--- a/config
+++ b/config
@@ -27,11 +27,15 @@ UNIX_SO=unix.$(EXT)
#------
# Top of your Lua installation
# Relative paths will be inside the src tree
-#
+
+DESTDIR=
+prefix=/usr/local
+top=$(DESTDIR)$(prefix)
+
#INSTALL_TOP_SHARE=/usr/local/share/lua/5.0
#INSTALL_TOP_LIB=/usr/local/lib/lua/5.0
-INSTALL_TOP_SHARE=/usr/local/share/lua/5.1
-INSTALL_TOP_LIB=/usr/local/lib/lua/5.1
+INSTALL_TOP_SHARE=$(top)/share/lua/5.1
+INSTALL_TOP_LIB=$(top)/lib/lua/5.1
INSTALL_DATA=cp
INSTALL_EXEC=cp
From 826589afcd17c7d7cffaae7613e1201b9777742c Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Mon, 8 Aug 2011 17:12:34 -0700
Subject: [PATCH 04/23] Add location of Ubuntu's lua5.1 headers to config.
---
config | 1 +
1 file changed, 1 insertion(+)
diff --git a/config b/config
index 62e52ef..bef8d72 100644
--- a/config
+++ b/config
@@ -17,6 +17,7 @@ UNIX_SO=unix.$(EXT)
#
#LUAINC=-I/usr/local/include/lua50
#LUAINC=-I/usr/local/include/lua5.1
+LUAINC=-I/usr/include/lua5.1
#LUAINC=-Ilua-5.1.1/src
#------
From 1f704cfb89324fd7b7cc6f92ea7fa66c7a46846c Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Tue, 27 Sep 2011 12:54:51 -0700
Subject: [PATCH 05/23] Add all-unix and install-unix targets which include all
 modules supported on unix

Besides standard socket and mime modules, this includes unix domain socket
support.
---
makefile | 5 ++++-
src/makefile | 2 ++
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/makefile b/makefile
index 6d70039..97a72e9 100644
--- a/makefile
+++ b/makefile
@@ -11,7 +11,7 @@ INSTALL_SOCKET_LIB=$(INSTALL_TOP_LIB)/socket
INSTALL_MIME_SHARE=$(INSTALL_TOP_SHARE)/mime
INSTALL_MIME_LIB=$(INSTALL_TOP_LIB)/mime
-all clean:
+all clean all-unix:
cd src; $(MAKE) $@
#------
@@ -46,6 +46,9 @@ install: all
cd src; mkdir -p $(INSTALL_MIME_LIB)
cd src; $(INSTALL_EXEC) $(MIME_SO) $(INSTALL_MIME_LIB)/core.$(EXT)
+install-unix: install all-unix
+ cd src; $(INSTALL_EXEC) $(UNIX_SO) $(INSTALL_SOCKET_LIB)/$(UNIX_SO)
+
#------
# End of makefile
#
diff --git a/src/makefile b/src/makefile
index b614f77..6ec8718 100644
--- a/src/makefile
+++ b/src/makefile
@@ -55,6 +55,8 @@ $(SOCKET_SO): $(SOCKET_OBJS)
$(MIME_SO): $(MIME_OBJS)
$(LD) $(LDFLAGS) -o $@ $(MIME_OBJS)
+all-unix: all $(UNIX_SO)
+
$(UNIX_SO): $(UNIX_OBJS)
$(LD) $(LDFLAGS) -o $@ $(UNIX_OBJS)
From a8b19e5367738f606a051f254858dc09de2a695a Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Tue, 27 Sep 2011 12:26:38 -0700
Subject: [PATCH 06/23] OS X CFLAGS definition caused silent failure to build
 debug version of luasocket

The luasocket tests require LUASOCKET_DEBUG to be defined at build time, but
on OS X, if COMPAT was undefined, the command line looked like
... -I -DLUASOCKET_DEBUG ... so the macro definition was silently treated as
the argument to -I. As a result, the macro was never set and the tests would
never run. Fixed by moving -I to the (optional) definition of the location of
the compat headers.
---
config | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/config b/config
index bef8d72..d6085ad 100644
--- a/config
+++ b/config
@@ -23,7 +23,7 @@ LUAINC=-I/usr/include/lua5.1
#------
# Compat-5.1 directory
#
-#COMPAT=compat-5.1r5
+#COMPAT=-Icompat-5.1r5
#------
# Top of your Lua installation
@@ -47,7 +47,7 @@ INSTALL_EXEC=cp
#
#CC=gcc
#DEF= -DLUASOCKET_DEBUG -DUNIX_HAS_SUN_LEN
-#CFLAGS= $(LUAINC) -I$(COMPAT) $(DEF) -pedantic -Wall -O2 -fno-common
+#CFLAGS= $(LUAINC) $(COMPAT) $(DEF) -pedantic -Wall -O2 -fno-common
#LDFLAGS=-bundle -undefined dynamic_lookup
#LD=export MACOSX_DEPLOYMENT_TARGET="10.3"; gcc
@@ -56,7 +56,7 @@ INSTALL_EXEC=cp
# for Linux
CC=gcc
DEF=-DLUASOCKET_DEBUG
-CFLAGS= $(LUAINC) $(DEF) -pedantic -Wall -O2 -fpic
+CFLAGS= $(LUAINC) $(COMPAT) $(DEF) -pedantic -Wall -O2 -fpic
LDFLAGS=-O -shared -fpic
LD=gcc
From 51acb54760dc91095d59839e8ea2256557f42781 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Fri, 17 Jun 2011 13:51:34 -0700
Subject: [PATCH 07/23] Stop returning an error after successful send of zero
 length UDP packets

A zero-length send is invalid with TCP, but well defined with UDP.
udp:send"" was returning (nil,"refused"), indicating that it failed when the
packet was actually sent. The test script reproduces the bug, and includes a
tcpdump of the zero length packet being sent.
---
src/usocket.c | 11 +++++------
test/udp-zero-length-send | 25 +++++++++++++++++++++++++
2 files changed, 30 insertions(+), 6 deletions(-)
create mode 100755 test/udp-zero-length-send
diff --git a/src/usocket.c b/src/usocket.c
index ef275b4..97f8b4f 100644
--- a/src/usocket.c
+++ b/src/usocket.c
@@ -213,14 +213,13 @@ int socket_send(p_socket ps, const char *data, size_t count,
for ( ;; ) {
long put = (long) send(*ps, data, count, 0);
/* if we sent anything, we are done */
- if (put > 0) {
+ if (put >= 0) {
*sent = put;
return IO_DONE;
}
err = errno;
- /* send can't really return 0, but EPIPE means the connection was
- closed */
- if (put == 0 || err == EPIPE) return IO_CLOSED;
+ /* EPIPE means the connection was closed */
+ if (err == EPIPE) return IO_CLOSED;
/* we call was interrupted, just try again */
if (err == EINTR) continue;
/* if failed fatal reason, report error */
@@ -243,12 +242,12 @@ int socket_sendto(p_socket ps, const char *data, size_t count, size_t *sent,
if (*ps == SOCKET_INVALID) return IO_CLOSED;
for ( ;; ) {
long put = (long) sendto(*ps, data, count, 0, addr, len);
- if (put > 0) {
+ if (put >= 0) {
*sent = put;
return IO_DONE;
}
err = errno;
- if (put == 0 || err == EPIPE) return IO_CLOSED;
+ if (err == EPIPE) return IO_CLOSED;
if (err == EINTR) continue;
if (err != EAGAIN) return err;
if ((err = socket_waitfd(ps, WAITFD_W, tm)) != IO_DONE) return err;
diff --git a/test/udp-zero-length-send b/test/udp-zero-length-send
new file mode 100755
index 0000000..a594944
--- /dev/null
+++ b/test/udp-zero-length-send
@@ -0,0 +1,25 @@
+#!/usr/bin/lua
+
+--[[
+Show that luasocket returns an error message on zero-length UDP sends,
+even though the send is valid, and in fact the UDP packet is sent
+to the peer:
+
+% sudo tcpdump -i lo -n
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
+13:40:16.652808 IP 127.0.0.1.56573 > 127.0.0.1.5432: UDP, length 0
+
+]]
+
+require"socket"
+
+s = assert(socket.udp())
+r = assert(socket.udp())
+assert(r:setsockname("*", 5432))
+assert(s:setpeername("127.0.0.1", 5432))
+
+ssz, emsg = s:send("")
+
+print(ssz == 0 and "OK" or "FAIL",[[send:("")]], ssz, emsg)
+
From c37f71d062379ff4f48658cddd724b94df20fb66 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Mon, 8 Aug 2011 16:49:20 -0700
Subject: [PATCH 08/23] Test showing failure to receive a zero-length packet.
---
test/udp-zero-length-send-recv | 35 ++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
create mode 100755 test/udp-zero-length-send-recv
diff --git a/test/udp-zero-length-send-recv b/test/udp-zero-length-send-recv
new file mode 100755
index 0000000..7d76c98
--- /dev/null
+++ b/test/udp-zero-length-send-recv
@@ -0,0 +1,35 @@
+#!/usr/bin/lua
+
+--[[
+Show that luasocket returns an error message on zero-length UDP sends,
+even though the send is valid, and in fact the UDP packet is sent
+to the peer:
+
+% sudo tcpdump -i lo -n
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
+13:40:16.652808 IP 127.0.0.1.56573 > 127.0.0.1.5432: UDP, length 0
+
+]]
+
+require"socket"
+
+s = assert(socket.udp())
+r = assert(socket.udp())
+assert(r:setsockname("*", 5432))
+assert(s:setpeername("127.0.0.1", 5432))
+
+ok, emsg = s:send("")
+if ok ~= 0 then
+ print("send of zero failed with:", ok, emsg)
+end
+
+ok, emsg = r:receive()
+
+if not ok or string.len(ok) ~= 0 then
+ print("receive of zero failed with:", ok, emsg)
+end
+
+print"ok"
+
+
From 21698c7665ee1cb43e1b83c3ea5cf4dbf827c1df Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Mon, 8 Aug 2011 16:11:47 -0700
Subject: [PATCH 09/23] Receive of zero for UDP is now possible

Previously, receive of zero was considered to be "closed", but that is only
true for stream-based protocols, like TCP.
---
src/udp.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/udp.c b/src/udp.c
index e604bea..8168bf2 100644
--- a/src/udp.c
+++ b/src/udp.c
@@ -170,6 +170,9 @@ static int meth_receive(lua_State *L) {
count = MIN(count, sizeof(buffer));
timeout_markstart(tm);
err = socket_recv(&udp->sock, buffer, count, &got, tm);
+ /* Unlike TCP, recv() of zero is not closed, but a zero-length packet. */
+ if (err == IO_CLOSED)
+ err = IO_DONE;
if (err != IO_DONE) {
lua_pushnil(L);
lua_pushstring(L, udp_strerror(err));
@@ -194,6 +197,9 @@ static int meth_receivefrom(lua_State *L) {
count = MIN(count, sizeof(buffer));
err = socket_recvfrom(&udp->sock, buffer, count, &got,
(SA *) &addr, &addr_len, tm);
+ /* Unlike TCP, recv() of zero is not closed, but a zero-length packet. */
+ if (err == IO_CLOSED)
+ err = IO_DONE;
if (err == IO_DONE) {
lua_pushlstring(L, buffer, got);
lua_pushstring(L, inet_ntoa(addr.sin_addr));
From f63d616bc048fe256181ff5e7e4aaca11afe3237 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Mon, 27 Jun 2011 17:04:32 -0700
Subject: [PATCH 10/23] Use poll by default for socket.connect(), instead of
 select()

Connect timeouts are implemented by waiting on the new socket descriptor.
When select() is used for this, it imposes an arbitrary limit on the number
of connections that can be made, usually 1024-3. Using poll() removes this
limit on the number of simultaneous TCP connections that can be made using
luasocket. The previous default implementation using select() is available by
defining SOCKET_SELECT. Note that socket.select() always uses select(), so it
isn't possible to wait on an arbitrary number of connections at once.
---
src/usocket.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/usocket.c b/src/usocket.c
index 97f8b4f..e43cfa4 100644
--- a/src/usocket.c
+++ b/src/usocket.c
@@ -16,7 +16,7 @@
/*-------------------------------------------------------------------------*\
* Wait for readable/writable/connected socket with timeout
\*-------------------------------------------------------------------------*/
-#ifdef SOCKET_POLL
+#ifndef SOCKET_SELECT
#include <sys/poll.h>
#define WAITFD_R POLLIN
From dace50628c5acc0aad94538eb6d3bd31e055d941 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Tue, 28 Jun 2011 17:48:21 -0700
Subject: [PATCH 11/23] Utility to find how many TCP connections can be made.
---
test/find-connect-limit | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
create mode 100755 test/find-connect-limit
diff --git a/test/find-connect-limit b/test/find-connect-limit
new file mode 100755
index 0000000..ad0c3f5
--- /dev/null
+++ b/test/find-connect-limit
@@ -0,0 +1,32 @@
+#!/usr/bin/env lua
+--[[
+Find out how many TCP connections we can make.
+
+Use ulimit to increase the max number of descriptors:
+
+ulimit -n 10000
+ulimit -n
+
+You'll probably need to be root to do this.
+]]
+
+require "socket"
+
+host = arg[1] or "google.com"
+port = arg[2] or 80
+
+connections = {}
+
+repeat
+ c = assert(socket.connect(hostip or host, 80))
+ table.insert(connections, c)
+
+ if not hostip then
+ hostip = c:getpeername()
+ print("resolved", host, "to", hostip)
+ end
+
+ print("connection #", #connections, c, "fd", c:getfd())
+
+until false
+
From 3b19f2a7edbcde798a9cf5f1f6175d360e891744 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Mon, 8 Aug 2011 16:23:06 -0700
Subject: [PATCH 12/23] testsrvr asserts when test finishes successfully

When the test client finishes, the test server asserts with a "closed"
message. After looking carefully at this, I think the tests are running
successfully and passing. Since it appears to be a test failure, I modified
the server to allow the client to close the control connection.
---
test/testsrvr.lua | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/test/testsrvr.lua b/test/testsrvr.lua
index f1972c2..4be4069 100644
--- a/test/testsrvr.lua
+++ b/test/testsrvr.lua
@@ -7,7 +7,12 @@ while 1 do
print("server: waiting for client connection...");
control = assert(server:accept());
while 1 do
- command = assert(control:receive());
+ command, emsg = control:receive();
+ if emsg == "closed" then
+ control:close()
+ break
+ end
+ assert(command, emsg)
assert(control:send(ack));
print(command);
(loadstring(command))();
From b1f7c349b5714ebe304f93e43576a0ff3f721fc1 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Thu, 23 Feb 2012 17:12:37 -0800
Subject: [PATCH 13/23] Add support for serial devices as socket streams on
unix.
---
config | 1 +
makefile | 1 +
src/makefile | 18 ++++-
src/serial.c | 183 ++++++++++++++++++++++++++++++++++++++++++++++++++
src/socket.h | 3 +
src/usocket.c | 60 +++++++++++++++++
6 files changed, 265 insertions(+), 1 deletion(-)
create mode 100644 src/serial.c
diff --git a/config b/config
index d6085ad..7d73638 100644
--- a/config
+++ b/config
@@ -11,6 +11,7 @@ MIME_V=1.0.2
SOCKET_SO=socket.$(EXT).$(SOCKET_V)
MIME_SO=mime.$(EXT).$(MIME_V)
UNIX_SO=unix.$(EXT)
+SERIAL_SO=serial.$(EXT)
#------
# Lua includes and libraries
diff --git a/makefile b/makefile
index 97a72e9..b1c9f18 100644
--- a/makefile
+++ b/makefile
@@ -48,6 +48,7 @@ install: all
install-unix: install all-unix
cd src; $(INSTALL_EXEC) $(UNIX_SO) $(INSTALL_SOCKET_LIB)/$(UNIX_SO)
+ cd src; $(INSTALL_EXEC) $(SERIAL_SO) $(INSTALL_SOCKET_LIB)/$(SERIAL_SO)
#------
# End of makefile
diff --git a/src/makefile b/src/makefile
index 6ec8718..c5c22f2 100644
--- a/src/makefile
+++ b/src/makefile
@@ -47,6 +47,17 @@ UNIX_OBJS:=\
usocket.o \
unix.o
+#------
+# Modules belonging to serial (device streams)
+#
+SERIAL_OBJS:=\
+ buffer.o \
+ auxiliar.o \
+ timeout.o \
+ io.o \
+ usocket.o \
+ serial.o
+
all: $(SOCKET_SO) $(MIME_SO)
$(SOCKET_SO): $(SOCKET_OBJS)
@@ -55,11 +66,14 @@ $(SOCKET_SO): $(SOCKET_OBJS)
$(MIME_SO): $(MIME_OBJS)
$(LD) $(LDFLAGS) -o $@ $(MIME_OBJS)
-all-unix: all $(UNIX_SO)
+all-unix: all $(UNIX_SO) $(SERIAL_SO)
$(UNIX_SO): $(UNIX_OBJS)
$(LD) $(LDFLAGS) -o $@ $(UNIX_OBJS)
+$(SERIAL_SO): $(SERIAL_OBJS)
+ $(LD) $(LDFLAGS) -o $@ $(SERIAL_OBJS)
+
#------
# List of dependencies
#
@@ -74,6 +88,8 @@ mime.o: mime.c mime.h
options.o: options.c auxiliar.h options.h socket.h io.h timeout.h \
usocket.h inet.h
select.o: select.c socket.h io.h timeout.h usocket.h select.h
+serial.o: serial.c auxiliar.h socket.h io.h timeout.h usocket.h \
+ unix.h buffer.h
tcp.o: tcp.c auxiliar.h socket.h io.h timeout.h usocket.h inet.h \
options.h tcp.h buffer.h
timeout.o: timeout.c auxiliar.h timeout.h
diff --git a/src/serial.c b/src/serial.c
new file mode 100644
index 0000000..b356a3a
--- /dev/null
+++ b/src/serial.c
@@ -0,0 +1,183 @@
+/*=========================================================================*\
+* Serial stream
+* LuaSocket toolkit
+\*=========================================================================*/
+#include
+
+#include "lua.h"
+#include "lauxlib.h"
+
+#include "auxiliar.h"
+#include "socket.h"
+#include "options.h"
+#include "unix.h"
+#include <fcntl.h>
+
+/*
+Reuses userdata definition from unix.h, since it is useful for all
+stream-like objects.
+
+If we stored the serial path for use in error messages or userdata
+printing, we might need our own userdata definition.
+
+Group usage is semi-inherited from unix.c, but unnecessary since we
+have only one object type.
+*/
+
+/*=========================================================================*\
+* Internal function prototypes
+\*=========================================================================*/
+static int global_create(lua_State *L);
+static int meth_send(lua_State *L);
+static int meth_receive(lua_State *L);
+static int meth_close(lua_State *L);
+static int meth_settimeout(lua_State *L);
+static int meth_getfd(lua_State *L);
+static int meth_setfd(lua_State *L);
+static int meth_dirty(lua_State *L);
+static int meth_getstats(lua_State *L);
+static int meth_setstats(lua_State *L);
+
+/* serial object methods */
+static luaL_reg un[] = {
+ {"__gc", meth_close},
+ {"__tostring", auxiliar_tostring},
+ {"close", meth_close},
+ {"dirty", meth_dirty},
+ {"getfd", meth_getfd},
+ {"getstats", meth_getstats},
+ {"setstats", meth_setstats},
+ {"receive", meth_receive},
+ {"send", meth_send},
+ {"setfd", meth_setfd},
+ {"settimeout", meth_settimeout},
+ {NULL, NULL}
+};
+
+/* our socket creation function */
+static luaL_reg func[] = {
+ {"serial", global_create},
+ {NULL, NULL}
+};
+
+
+/*-------------------------------------------------------------------------*\
+* Initializes module
+\*-------------------------------------------------------------------------*/
+int luaopen_socket_serial(lua_State *L) {
+ /* create classes */
+ auxiliar_newclass(L, "serial{client}", un);
+ /* create class groups */
+ auxiliar_add2group(L, "serial{client}", "serial{any}");
+ /* make sure the function ends up in the package table */
+ luaL_openlib(L, "socket", func, 0);
+ /* return the function instead of the 'socket' table */
+ lua_pushstring(L, "serial");
+ lua_gettable(L, -2);
+ return 1;
+}
+
+/*=========================================================================*\
+* Lua methods
+\*=========================================================================*/
+/*-------------------------------------------------------------------------*\
+* Just call buffered IO methods
+\*-------------------------------------------------------------------------*/
+static int meth_send(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkclass(L, "serial{client}", 1);
+ return buffer_meth_send(L, &un->buf);
+}
+
+static int meth_receive(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkclass(L, "serial{client}", 1);
+ return buffer_meth_receive(L, &un->buf);
+}
+
+static int meth_getstats(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkclass(L, "serial{client}", 1);
+ return buffer_meth_getstats(L, &un->buf);
+}
+
+static int meth_setstats(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkclass(L, "serial{client}", 1);
+ return buffer_meth_setstats(L, &un->buf);
+}
+
+/*-------------------------------------------------------------------------*\
+* Select support methods
+\*-------------------------------------------------------------------------*/
+static int meth_getfd(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkgroup(L, "serial{any}", 1);
+ lua_pushnumber(L, (int) un->sock);
+ return 1;
+}
+
+/* this is very dangerous, but can be handy for those that are brave enough */
+static int meth_setfd(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkgroup(L, "serial{any}", 1);
+ un->sock = (t_socket) luaL_checknumber(L, 2);
+ return 0;
+}
+
+static int meth_dirty(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkgroup(L, "serial{any}", 1);
+ lua_pushboolean(L, !buffer_isempty(&un->buf));
+ return 1;
+}
+
+/*-------------------------------------------------------------------------*\
+* Closes socket used by object
+\*-------------------------------------------------------------------------*/
+static int meth_close(lua_State *L)
+{
+ p_unix un = (p_unix) auxiliar_checkgroup(L, "serial{any}", 1);
+ socket_destroy(&un->sock);
+ lua_pushnumber(L, 1);
+ return 1;
+}
+
+
+/*-------------------------------------------------------------------------*\
+* Just call tm methods
+\*-------------------------------------------------------------------------*/
+static int meth_settimeout(lua_State *L) {
+ p_unix un = (p_unix) auxiliar_checkgroup(L, "serial{any}", 1);
+ return timeout_meth_settimeout(L, &un->tm);
+}
+
+/*=========================================================================*\
+* Library functions
+\*=========================================================================*/
+
+
+/*-------------------------------------------------------------------------*\
+* Creates a serial object
+\*-------------------------------------------------------------------------*/
+static int global_create(lua_State *L) {
+ const char* path = luaL_checkstring(L, 1);
+
+ /* allocate unix object */
+ p_unix un = (p_unix) lua_newuserdata(L, sizeof(t_unix));
+
+ /* open serial device */
+ t_socket sock = open(path, O_NOCTTY|O_RDWR);
+
+ /*printf("open %s on %d\n", path, sock);*/
+
+ if (sock < 0) {
+ lua_pushnil(L);
+ lua_pushstring(L, socket_strerror(errno));
+ lua_pushnumber(L, errno);
+ return 3;
+ }
+ /* set its type as client object */
+ auxiliar_setclass(L, "serial{client}", -1);
+ /* initialize remaining structure fields */
+ socket_setnonblocking(&sock);
+ un->sock = sock;
+ io_init(&un->io, (p_send) socket_write, (p_recv) socket_read,
+ (p_error) socket_ioerror, &un->sock);
+ timeout_init(&un->tm, -1, -1);
+ buffer_init(&un->buf, &un->io, &un->tm);
+ return 1;
+}
diff --git a/src/socket.h b/src/socket.h
index de5d79f..debb13a 100644
--- a/src/socket.h
+++ b/src/socket.h
@@ -68,6 +68,9 @@ const char *socket_strerror(int err);
int socket_send(p_socket ps, const char *data, size_t count,
size_t *sent, p_timeout tm);
int socket_recv(p_socket ps, char *data, size_t count, size_t *got, p_timeout tm);
+int socket_write(p_socket ps, const char *data, size_t count,
+ size_t *sent, p_timeout tm);
+int socket_read(p_socket ps, char *data, size_t count, size_t *got, p_timeout tm);
const char *socket_ioerror(p_socket ps, int err);
int socket_gethostbyaddr(const char *addr, socklen_t len, struct hostent **hp);
diff --git a/src/usocket.c b/src/usocket.c
index e43cfa4..46087c6 100644
--- a/src/usocket.c
+++ b/src/usocket.c
@@ -300,6 +300,66 @@ int socket_recvfrom(p_socket ps, char *data, size_t count, size_t *got,
return IO_UNKNOWN;
}
+
+/*-------------------------------------------------------------------------*\
+* Write with timeout
+*
+* socket_read and socket_write are cut-n-paste of socket_send and socket_recv,
+* with send/recv replaced with write/read. We can't just use write/read
+* in the socket version, because behaviour when size is zero is different.
+\*-------------------------------------------------------------------------*/
+int socket_write(p_socket ps, const char *data, size_t count,
+ size_t *sent, p_timeout tm)
+{
+ int err;
+ *sent = 0;
+ /* avoid making system calls on closed sockets */
+ if (*ps == SOCKET_INVALID) return IO_CLOSED;
+ /* loop until we send something or we give up on error */
+ for ( ;; ) {
+ long put = (long) write(*ps, data, count);
+ /* if we sent anything, we are done */
+ if (put >= 0) {
+ *sent = put;
+ return IO_DONE;
+ }
+ err = errno;
+ /* EPIPE means the connection was closed */
+ if (err == EPIPE) return IO_CLOSED;
+ /* we call was interrupted, just try again */
+ if (err == EINTR) continue;
+ /* if failed fatal reason, report error */
+ if (err != EAGAIN) return err;
+ /* wait until we can send something or we timeout */
+ if ((err = socket_waitfd(ps, WAITFD_W, tm)) != IO_DONE) return err;
+ }
+ /* can't reach here */
+ return IO_UNKNOWN;
+}
+
+/*-------------------------------------------------------------------------*\
+* Read with timeout
+* See note for socket_write
+\*-------------------------------------------------------------------------*/
+int socket_read(p_socket ps, char *data, size_t count, size_t *got, p_timeout tm) {
+ int err;
+ *got = 0;
+ if (*ps == SOCKET_INVALID) return IO_CLOSED;
+ for ( ;; ) {
+ long taken = (long) read(*ps, data, count);
+ if (taken > 0) {
+ *got = taken;
+ return IO_DONE;
+ }
+ err = errno;
+ if (taken == 0) return IO_CLOSED;
+ if (err == EINTR) continue;
+ if (err != EAGAIN) return err;
+ if ((err = socket_waitfd(ps, WAITFD_R, tm)) != IO_DONE) return err;
+ }
+ return IO_UNKNOWN;
+}
+
/*-------------------------------------------------------------------------*\
* Put socket into blocking mode
\*-------------------------------------------------------------------------*/
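
The serial module added above registers socket.serial(path), which returns an
object with the usual buffered send, receive, settimeout, and close methods.
A short usage sketch follows; the device path and the data sent are
placeholders, and note that the patch does not configure termios settings
such as the baud rate, which must be set outside luasocket (for example with
stty):

  -- assumes the module was installed as socket/serial.so by install-unix
  local serial = require("socket.serial")   -- require returns the constructor

  local tty = assert(serial("/dev/ttyS0"))  -- example device path
  tty:settimeout(2)                         -- seconds, as for TCP objects

  assert(tty:send("hello\r\n"))             -- buffered send
  print(tty:receive("*l"))                  -- buffered receive, one line

  tty:close()
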
From 3a0fd4744daa972e59579a753af2da9dbde36edd Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Thu, 6 Oct 2011 11:52:52 -0700
Subject: [PATCH 14/23] Reference index was missing documented APIs, and only
partially alphabetized.
---
doc/reference.html | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/doc/reference.html b/doc/reference.html
index b329f57..f13a6bd 100644
--- a/doc/reference.html
+++ b/doc/reference.html
@@ -42,9 +42,9 @@ Support, Manual">
DNS (in socket)
-toip,
+gethostname,
tohostname,
-gethostname.
+toip.
@@ -108,9 +108,9 @@ Support, Manual">
MIME
high-level:
-normalize,
decode,
encode,
+normalize,
stuff,
wrap.
@@ -120,10 +120,10 @@ Support, Manual">
dot,
eol,
qp,
-wrp,
-qpwrp.
+qpwrp,
unb64,
unqp,
+wrp.
@@ -142,6 +142,8 @@ Support, Manual">
Socket
+bind,
+connect,
_DEBUG,
dns,
gettime,
@@ -171,6 +173,7 @@ Support, Manual">
getpeername,
getsockname,
getstats,
+listen,
receive,
send,
setoption,
From 12bde801f6a5d3a192dee29dda1266108aa98d45 Mon Sep 17 00:00:00 2001
From: Sam Roberts
Date: Mon, 24 Oct 2011 11:24:58 -0700
Subject: [PATCH 15/23] Document dirty, getfd, and setfd for select and tcp.
---
doc/reference.html | 3 +++
doc/socket.html | 4 ++++
doc/tcp.html | 60 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 67 insertions(+)
diff --git a/doc/reference.html b/doc/reference.html
index f13a6bd..edffc40 100644
--- a/doc/reference.html
+++ b/doc/reference.html
@@ -170,12 +170,15 @@ Support, Manual">
bind,
close,
connect,
+dirty,
+getfd,
getpeername,
getsockname,
getstats,
listen,
receive,
send,
+setfd,
setoption,
setstats,
settimeout,
diff --git a/doc/socket.html b/doc/socket.html
index f096e4b..4d44f01 100644
--- a/doc/socket.html
+++ b/doc/socket.html
@@ -217,6 +217,10 @@ method or accept might block forever.
it to select, it will be ignored.
+
+Using select with non-socket objects: Any object that implements getfd and dirty can be used with select, allowing objects from other libraries to be used within a socket.select driven loop.
+
+
diff --git a/doc/tcp.html b/doc/tcp.html
index 602c73c..ab70f04 100644
--- a/doc/tcp.html
+++ b/doc/tcp.html
@@ -507,6 +507,66 @@ This is the default mode;
This function returns 1.
+
+
+
+master:dirty()
+client:dirty()
+server:dirty()
+
+
+
+Check the read buffer status.
+
+
+
+Returns true if there is any data in the read buffer, false otherwise.
+
+
+
+Note: This is an internal method; any use is unlikely to be portable.
+
+
+
+
+
+master:getfd()
+client:getfd()
+server:getfd()
+
+
+
+Returns the underlying socket descriptor or handle associated with the object.
+
+
+
+The descriptor or handle. If the object has been closed, the return value will be -1.
+
+
+
+Note: This is an internal method; any use is unlikely to be portable.
+
+
+
+
+
+master:setfd(fd)
+client:setfd(fd)
+server:setfd(fd)
+
+
+
+Sets the underlying socket descriptor or handle associated with the object. The current one is simply replaced, not closed, and no other change to the object state is made.
+
+
+
+No return value.
+
+
+
+Note: This is an internal method; any use is unlikely to be portable.
+
+
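
The note added to socket.html above says that any object implementing getfd
and dirty can take part in socket.select. A small sketch of that protocol
follows; the wrapped descriptor 0 (standard input) is only an example:

  local socket = require("socket")

  -- a minimal object exposing the two methods select looks for
  local stdin_obj = {}
  function stdin_obj:getfd() return 0 end      -- descriptor to wait on
  function stdin_obj:dirty() return false end  -- no internally buffered data

  -- wait up to 5 seconds for the wrapped descriptor to become readable;
  -- real sockets can be mixed into the same table
  local readable, _, err = socket.select({stdin_obj}, nil, 5)
  if readable[1] == stdin_obj then
    print("stdin is readable")
  else
    print("select returned:", err)             -- "timeout" if nothing arrived
  end
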