Ennetech by Erasmus and Kinkajou Authors


Kinkajou Tells You What Really Happened. The Truth Is Out There!

SciFi on the Net >> Sprites






There are many legal and financial issues limiting the growth of the internet.

Money is critical.













My most general impression of the progress of the Internet revolution, even now, is that new implementations and growth are limited not by the technology, but by social rules (as defined by law) and by issues such as finance.

The rules we have made define how we can all work together, but not necessarily how we can work together optimally. The rules which define our world, by necessity, arise from an earlier, pre-Internet era. They have ended up being used in ways we would never have expected.

Should Internet information be governed by “telecommunications rules” or by information service rules? (Different sets of laws with substantially different implications govern these situations). To what extent can information be sold, as for example in enabling location marketing? Should companies such as Google be allowed to control the information superhighway to the extent they do?

(Note the example of the Spanish government, which legislated that Google News should pay licence fees to the owners of the information indexed by its search engine. When Google dealt with this situation by excluding Spain from its news indexing service, the information owners, namely the news media companies, attempted to reverse the law because they were losing traffic.)



Other issues besides copyright relating to the Internet include:

Net neutrality

Data and Online privacy

Online contracts

Online marketplaces

International online companies

Deleting data

Internet lies

Internet porn

Garcia v. Google: claiming copyright just for being part of an image background

Community Power




In olden days, when there was a substantial cost to replicating information, information distributed through the news media (newspapers, magazines, TV or radio) had substantial value. Today we are becoming ever more aware that information can be replicated very easily, bypassing many of the traditional rights of an owner.

This means that an owner of information (including images or video), can no longer charge whatever they want for access to their product. They need to charge a price which is a compromise between what they would like to charge and what the market will pay for the convenience of access without going to the trouble of replicating or stealing the information.

This brings us to our next issue. The biggest asset and the biggest problem of the Internet is that content is free. People can post any information, which can then be accessed and used by anybody. However, when there is usually no capacity for financial remuneration, there is also very little incentive to create and present quality products.




These types of issues are a product of our existing rules as they come into conflict with the Internet era. Perhaps one solution is that phone line companies or ISPs be required to pay a "copyright" payment to the owners of websites accessed. All too often, the only money earner for website owners is advertising or selling things. Telcos (phone line companies) and ISPs are perhaps the only parties guaranteed to be better off from our use of the Internet.

They could make sure that owners get paid. As a matter of justice, they have built their business on moving other people's property, albeit intellectual property.

There is the issue of to what extent we wish to build an "Internet". To have a network of devices, public and private, we require a network of bandwidth, public and private. But who pays? Wi-Fi, for example, is often provided by merchants as part of the use of a commercial establishment such as a coffee house, Starbucks being the most famous example. (You buy their coffee and you get to use the Internet for free.)

Perhaps we should be legislating that 9600 baud access be available as a general community right. This would enable poorer members of society to access information on the Internet, and enable public devices such as sprites to communicate via a network as well. This might not work too well if website builders insist on including large graphic or video files. The technology, namely the web software, must be able to deliver concise code so that everyone can access information.




If sprites are to form a part of our world, they need a communication highway to carry their signals. In short, the community needs to create a structure to let these tools work for it. Free data transfer may be essential to allow these mechanisms to plug into the internet.


Conceivably the day will come when sprites explore deep space, provide measurements through sensors on planets and moons and asteroids throughout the solar system, explore the deep oceans, watch for poachers in our national parks, act as CCTV replacements in assisting security, and act as weather measurement devices in climate change.

Our sprites could monitor animals, down to individual members of a school of fish, a flock of birds or a herd of mammals. It is even possible that sprites could deliver microscopic devices capable of being incorporated into a developing brain, and so capable of controlling or influencing the actions of macro creatures.


One particular dream haunts me. A young man with his friends goes to war carrying a basket. Inside the basket are a collection of insects. “Spiders!”

T spiders, C spiders, D spiders, E spiders: all genetically engineered, each with a role:

  • T spiders organically create micro terminals for optical communications.

  • C spiders create organic communication microfilaments for communications interlinking.

  • D spiders create holographic displays.

  • E spiders construct grids/sheets for capturing light energy and feeding it into the network.

Organic bots create the infrastructure for war communications.

The Holy Grail of such interconnected technology would be the ability to genetically engineer organisms capable of self-replication and interconnection. Not only can we create pseudo-life such as sprites; they can continue to create more of themselves, insinuating themselves more and more into a greater and greater number of tasks which we, as the coordinating species, would require of them.



In such an interconnected world, crime would be essentially impossible. Crime becomes more a list of infractions for which a person would be fined. No offence would be too small to be recorded.



The structure of information access also creates significant distortions in the structure of institutions such as the news media. Newspaper publishers sell newspapers; in short, the newspaper itself is the product. But in a world where people have enhanced access to information, it may well be the writer him or herself who is the marketable commodity, not the newspaper. For example, people may learn to follow a particular writer and their articles rather than a newspaper in general. Power begins to devolve to individual people rather than to institutions such as newspapers or magazines.

As a final note, I believe it to be sobering to realise that it is the social rules and financial issues that govern the growth and applicability of the technology to humanity, and not the actual limitations of the technology itself.



What does this technology remind me of in Brisbane?





Best examples from Science Fiction referring to this technology:

Probably even more important changes would result from people being connected to information 24/7. Neal Asher broached this possibility in his novel "Grid Linked". People who are grid linked for years become addicted to extracting information quickly and directly from computer systems, becoming increasingly alienated from normal human interactions at normal human speeds.

Interestingly, Donald Kingsbury in his book "Psychohistorical Crisis" presages increasing processing power at the individual human level supplanting the processing power available from being connected to the net. He describes the development of the "FAM", which is integrated into the developing human brain, allowing the brain to access many computer-type processes directly, enhancing and supporting its own distinct type of neural processing.


Unfortunately, the Internet in science fiction is often used in a very mundane fashion. Many of the proposals actually belong in the past, not the future. (Some of the books are old treasures, so they actually did have a pretty good go, pre-internet, at guessing where it all might go.) Examples:

Gordon R Dickson: the Dorsai series and its sequels. Earth builds an integrative computer to hold the knowledge of the human planets.



Orson Scott Card: Ender's Game. A program searches databases on the home planet (Earth), seeking a picture to provide Ender with a personal link to a computer game. Some of the characters access the Internet and become avatars.


Ian Douglas: Heritage Trilogy. Computer AIs are an integral part of the Armed Forces. Some are more advanced than others. As usual, military-issue equipment is substantially behind the times, the need for standardisation driving "safe" software.


Brian Stableford: Hooded Swan series. New Alexandria aims to lead a scientific revolution. Humanity’s knowledge has stagnated due to the flow of people into the galaxy from the central worlds. New Alexandria hopes by accumulating knowledge to change this trend.


I think the truly exciting developments are:

 Neal Asher: Grid Linked and related books. Humans are becoming integrated with information sources and other machine-based enhancers. The Grid is the Internet of the future.

Lee Hogan: Belarus. Sprites represent one of the most exciting future technologies. They are Internet-based.




Clever New Applications:

The computer sector is one of the most actively driven sectors of technological development today, and finance is the engine which drives this development. Because the sale of computer technology is so profitable, substantial intellectual and other resources are available for the development of new ideas. Consequently, I think it is very hard to suggest clever new applications that have not already been thought of.

My picks from everyone else’s clever new applications –

  • Bayesian / heuristic programming

  • AI software

  • The Sprite – alluded to originally by Microsoft as the “Web of Embedded Devices” or the “Web of Things”. I think though that sometimes a clever new name can drive as much development as a good idea. The name “Sprite” engages the attention and imagination to my mind far more than “embedded devices” and “web of things”.

  • Interface apps for sprites.

  • Complex search apps for search engines such as Google: download a data cluster with a simple search, then undertake a complex round of filtering to narrow the information down to match your needs more closely. A number of terms can already be used within the search engine's search window.

  • World trade: Amazon, AliExpress, services marketplaces, pharmaceutical marketplaces, et cetera. The world is now potentially at the average person's fingertips.

  • Porn: like it or not, this is one of the engines that drives the Internet. People are embarrassed to display their interests. The Internet allows the distribution of these items across national borders, bypassing any censorship scheme, at any time of day or night. At the most basic level there is a need for goods coding/standardisation. My idea would be stick figure coding such as:

    Porn: the killer internet app

  • Porn: the ability to change a face or hair colour will change porn forever. In the digital age there is no reason why something that is produced can't be altered. Face software should easily be able to replace, on the fly, the original star's face with a chosen one, with a chosen hair colour or cut. Composite pictures will one day be here to stay.

  • Encryption/VPN – a method of ensuring privacy.

  • Improved or Advanced Query Parsing: using a software interface to tell the search engine what to search for. A bit more than the usual command line interface of Google and most search engines.
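The "download a data cluster, then narrow it down" idea from the list above can be sketched in a few lines. The documents and filter terms here are invented purely for illustration; a real application would fetch its broad result set from a search engine API.

```python
# Sketch: progressively narrowing a broad result cluster with local filters.
# The documents below are invented examples, not real search results.

docs = [
    "sprite networks for deep space sensing",
    "coffee house wifi access policies",
    "sprite swarms monitoring a school of fish",
    "net neutrality and telco pricing",
]

def narrow(documents, required_terms, excluded_terms=()):
    """Keep documents containing all required terms and none of the excluded ones."""
    result = []
    for doc in documents:
        words = doc.lower().split()
        if all(t in words for t in required_terms) and \
           not any(t in words for t in excluded_terms):
            result.append(doc)
    return result

# First a broad pass, then a second refining pass over the smaller cluster.
broad = narrow(docs, ["sprite"])
refined = narrow(broad, ["sprite"], excluded_terms=["fish"])
```

Each pass works on an ever-smaller local cluster, so the expensive round trip to the search engine happens only once.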


Query Parsers

A query parser stands between you and the documents you are seeking and translates your search string into specific instructions for the search engine. A search application cannot reach its peak performance without intelligent query parsing, which allows for relevancy customization, additional security-trimming and taking input from user interface variables or outside data sources. 

We generally get these goodies by choosing our search engine. Types of search engine searches include:

Obsessive in their attention to detail:
e.g. finding only the words A and B when they appear in the same sentence in an article.

Semi-intelligent: delivering documents as defined above, but also some that seem only roughly related to your search criteria.

Agent Worker: acts like an expert searcher on behalf of the user. It delivers all of the documents with A and B, and decides to sort them so that documents with A and B in the title come first, then the most relevant documents, with the least relevant documents later.

I can see how important it would be to match the needs of the user and their search engine agent. The types of queries which are entered by librarians, lawyers, or PhDs will be very different to the types of queries many average users would enter.


Unfortunately, most text search products don't provide any flexibility in the query language.

So what are all the things you can do with a query parser?
1. Identify What to Search For
Step one of query parsing is to identify what words get searched. This is not as simple as it might seem. 

At a minimum, the query parser will need to identify what is a query term. Is F-111 two terms or just one? Is the dash important? What about F1 11 ? What about F111?  What about when "F-111" is enclosed in double-quotes? Does that mean something different? 
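The tokenization decisions above can be made concrete with a toy tokenizer. The rules here (hyphenated words stay whole, double-quoted spans survive as one phrase token) are just one possible set of choices, not any particular engine's behaviour.

```python
import re

# Toy tokenizer illustrating the "what is a term?" decisions: "F-111" is kept
# as one hyphenated term, while a double-quoted span becomes one phrase token.
TOKEN = re.compile(r'"[^"]*"|[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*')

def tokenize(query):
    """Return query terms; quoted spans survive as a single phrase token."""
    return TOKEN.findall(query)

tokenize('F-111 photos')   # hyphenated word kept as a single term
tokenize('"F-111" F1 11')  # quoted phrase vs. two bare terms
```

Whatever rules the tokenizer applies must mirror the indexer's, or the parser will produce terms that can never match.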

Of course all of these decisions must be made in reference to the indexer. The query parser can only search for terms that the indexer has decided to index. And so you may wish to search for +/- 5 miles, but it is likely that the characters "+/-" are not anywhere to be found in the index. 

2. Parse The Query Language Itself 

Most obviously, the query parser must parse the query language itself. This includes recognizing and interpreting operators (AND, OR, +, -, NOT, etc.) grouping operators, and field restrictors. 

Even Google can now execute queries like this: 

music -video (singer OR songwriter) site:youtube.com

Depending on the end-user, the query language can either be very simple (just a string of tokens, perhaps with double-quoted phrases) or quite complex (full Boolean syntax with nested expressions, proximity operators, and term weighting). Or it can be a hybrid, like Apache Lucene.
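A full Boolean syntax of the kind just described can be handled by a small recursive-descent parser. The grammar and operator precedence below (OR binds loosest, then AND, then NOT and parentheses) are an illustrative choice, not any particular engine's syntax, and the parser evaluates directly against a document's term set rather than building a query tree.

```python
import re

def matches(query, doc_terms):
    """Evaluate a boolean query (AND, OR, NOT, parentheses) against a set of terms."""
    tokens = re.findall(r'\(|\)|\w+', query)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expr():            # expr := term ('OR' term)*
        val = term()
        while peek() == 'OR':
            take()
            val = term() or val
        return val

    def term():            # term := factor ('AND' factor)*
        val = factor()
        while peek() == 'AND':
            take()
            val = factor() and val
        return val

    def factor():          # factor := 'NOT' factor | '(' expr ')' | word
        tok = take()
        if tok == 'NOT':
            return not factor()
        if tok == '(':
            val = expr()
            take()         # consume the closing ')'
            return val
        return tok.lower() in doc_terms

    return expr()

matches('music AND (singer OR songwriter) AND NOT video', {"music", "singer"})
```

A production parser would also build an abstract query tree so the engine can optimize execution, but the control flow is the same.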

3. Provide Access to Search Engine Features 

Some query languages are never intended to be used by end-users. Oracle Text and FAST FQL fall into this category. Instead, these languages are intended to be used by other programmers for accessing all features provided by the search engine. 

Ideally, all search features which can be processed by the engine are available somehow, but this is often not the case without customization. For example, there are many (many!) more search features available in the Lucene query engine than can be expressed by the standard Lucene query syntax. If you want to take advantage of these additional features, you must create your own query parser.

4. Search for Other Things 

When you search for "car", do you also want to find "cars"? How about when you search for "mouse", should you also find "mice"?

This can get quite complex. For example, a search for "blackjack" may give "sphalerite" and "mock lead" (and vice-versa). Such queries are often useful to help naïve end-users searching over vertical domains (within the same web site) with which they may be unfamiliar. 

And don't forget fuzzy searching. If your data is dirty or often misspelt, this can be critical. One example: finding 14 variations of the word "Kawasaki" including things like "akawasaki", "Kawasaki", and "hawasaki". 
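Fuzzy matching like the "Kawasaki" example above is usually built on edit distance. The sketch below uses the classic Levenshtein dynamic-programming algorithm; the one-edit threshold is an illustrative assumption, and real engines use more efficient index structures than a linear scan.

```python
# Sketch of fuzzy search: catch misspellings within a small edit distance.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_hits(term, index_terms, max_edits=1):
    """Return indexed terms within max_edits of the query term (case-insensitive)."""
    return [t for t in index_terms
            if edit_distance(term.lower(), t.lower()) <= max_edits]

fuzzy_hits("kawasaki", ["akawasaki", "Kawasaki", "hawasaki", "yamaha"])
```

Raising `max_edits` catches dirtier data at the cost of more false positives, which is exactly the trade-off a query parser must tune for its users.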

5. Aid in Relevancy Ranking 

In the role of translator, query parsers will often make adjustments to the query entered by the user to help retrieve documents better. For example, documents which contain search terms in the title or abstract may be considered to be more relevant. 

There are many kinds of relevancy ranking tricks and techniques that a query parser can employ as long as the underlying engine is powerful enough. For example, query parsers can boost documents which contain all of the terms close together (proximity weighting) or boost documents from friendly web sites while reducing documents from un-friendly sites. 

If you also have control over how the documents are indexed, then the sky is the limit. You can do sophisticated things such as boosting home pages, boosting documents recently published, boosting documents which contain key domain-specific terminology, or performing fine-grain control over what types of queries are strong hits for each individual document. 
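The title and proximity boosts described above can be sketched as a simple scoring function. The weights and the four-word proximity window here are arbitrary assumptions for illustration; the documents are invented.

```python
# Illustrative relevancy scoring: base term frequency, plus a title boost and
# a proximity boost. All weights are arbitrary choices for the sketch.
def score(doc, terms):
    body = doc["body"].lower().split()
    title = doc["title"].lower().split()
    s = sum(body.count(t) for t in terms)       # base: term frequency in body
    s += 5 * sum(t in title for t in terms)     # boost terms found in the title
    positions = [i for i, w in enumerate(body) if w in terms]
    if len(positions) >= 2 and max(positions) - min(positions) <= 3:
        s += 3                                  # boost terms close together
    return s

docs = [
    {"title": "Sprite networks", "body": "sprite swarms form networks in deep space"},
    {"title": "Coffee houses", "body": "free wifi with every sprite of networks coffee"},
]
ranked = sorted(docs, key=lambda d: score(d, ["sprite", "networks"]), reverse=True)
```

The document whose title matches both terms rises to the top even though both bodies contain the same terms, which is the effect the query parser is after.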

For most of us, it's probably not worthwhile trawling an ocean such as the internet to classify everything out there using our own design of index criteria. However, we should aim to get the most out of the search engines we do use regularly.