My Software Development project

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
I am working on a big data a.i. analytics engine.
I am not a software engineer so it will be a challenge to get started.
I am interested in feedback and advice on where to start @Cognisant

What I need to do is create a hierarchical stack that organizes files and creates insight from them.
I was thinking about making it autonomous so it will reflect on its thinking. (self-similarity)

I want to make it so anyone can use it. Not just people with money.
I want it to be open source and better than IBM Watson.
You can program it yourself.

It will be like the internet with links and a context sub-engine on all data.
 

Rebis

Blessed are the hearts that can bend
Local time
Today 9:41 PM
Joined
Oct 6, 2019
Messages
1,669
---
Location
Ireland
Sounds like a lot of math; this would require a lot of complex algorithms.

Topics you'll want to look at, just to name a few:
-Synchronous Hopfield networks, based on continuous values in the range [0, 1] instead of discrete variables (0 or 1). This will be needed for NLP (see the sketch after this list).
-Relational languages (enums that format sentences into grammatical structure).
-Recurrent backpropagation for recursion.
-An initial function for feedforward propagation, stemming from the actual definition of the syntax used prior to context.
-Data sets would be the dictionary of the language you're using; I would look at Amazon Kindle's AI for defining words in a given book, proceeding from academic papers with formal language through to writers' blogs and social media. There is a limit to the context a computer can understand using the web; all it sees is descriptions, after all, with no interpersonal exchange like sarcasm or irony.
-Activation functions, values, inputs/outputs. An archetypal language that all of the words within a language can stem from, assuming a completeness theorem and such; e.g. ASCII for word descriptions that are independent of context.
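
A minimal sketch of the first bullet in Java, assuming a continuous-valued, Hopfield-style update (the network size, gain, and stored pattern are invented for illustration):

Code:
import java.util.Arrays;
import java.util.Random;

// Continuous-valued, Hopfield-style network: unit states live in (0, 1) instead
// of being discrete 0/1, and a synchronous sweep pushes every unit toward the
// sigmoid of its weighted input. Size, gain, and the stored pattern are arbitrary.
public class ContinuousHopfield {
    static final int N = 16;                 // number of units (arbitrary)
    static final double GAIN = 8.0;          // sharpens the sigmoid (arbitrary)
    static double[][] w = new double[N][N];
    static double[] state = new double[N];

    // Hebbian-style weights from a single stored 0/1 pattern (illustrative only).
    static void store(double[] pattern) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (i != j) w[i][j] += (2 * pattern[i] - 1) * (2 * pattern[j] - 1) / N;
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // One synchronous update: every unit is recomputed from the previous state.
    static void update() {
        double[] next = new double[N];
        for (int i = 0; i < N; i++) {
            double sum = 0;
            for (int j = 0; j < N; j++) sum += w[i][j] * state[j];
            next[i] = sigmoid(GAIN * sum);   // keeps the state continuous, in (0, 1)
        }
        state = next;
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        double[] pattern = new double[N];
        for (int i = 0; i < N; i++) pattern[i] = r.nextInt(2);  // a random 0/1 pattern
        store(pattern);
        // Start from a noisy version of the pattern and let the network settle.
        for (int i = 0; i < N; i++)
            state[i] = pattern[i] > 0.5 ? 0.6 + 0.4 * r.nextDouble() : 0.4 * r.nextDouble();
        for (int t = 0; t < 20; t++) update();
        System.out.println("Stored:  " + Arrays.toString(pattern));
        System.out.println("Settled: " + Arrays.toString(state));
    }
}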

This isn't even the math part, that's just the theoretical ground work.

The problem with endless recursion through backpropagation is that the definition can become so obscure from its initial feedforward value that it is too dynamic and loses context altogether if it's put in multiple disparate contexts. The definition may become clearer, but it also may just go down a rabbit hole. The AI's definition of the word may be better, but the functionality of the word depends on the people using it, and for the most part they're using the dictionary. It's pretty abstract to get an AI to adapt its language to you without some negotiation with the user about the context. This is meta-stuff.
 

Ex-User (14663)

Prolific Member
Local time
Today 9:41 PM
Joined
Jun 7, 2017
Messages
2,939
---
Not to be a pessimist, but this project sounds way too general and excessive. Not only will you have to develop and implement a lot of extremely advanced AI theory, and perhaps even invent new AI theory, you will also have to build a whole data infrastructure with databases, servers, etc., and on top of that maintain a web service for this.

I would recommend starting with a specific algorithm and implementing a prototype. If that works one can move on to setting up an infrastructure around it with databases, servers and whatnot. It's all about breaking up the project into smaller steps.
 

Rebis

Blessed are the hearts that can bend
Local time
Today 9:41 PM
Joined
Oct 6, 2019
Messages
1,669
---
Location
Ireland
Not to be a pessimist, but this project sounds way too general and excessive. Not only will you have to develop and implement a lot of extremely advanced AI theory, and perhaps even invent new AI theory, you will also have to build a whole data infrastructure with databases, servers, etc., and on top of that maintain a web service for this.

I would recommend starting with a specific algorithm and implementing a prototype. If that works one can move on to setting up an infrastructure around it with databases, servers and whatnot. It's all about breaking up the project into smaller steps.

I was thinking the same, but thought I should offer a theoretical starting point. Or just, y'know Smokes doink get a qUANTum compooter that can process gazillions of equivalent floating point numbers which increases the flop speed by 10,000,000%. Just use that for big Old Johnny, the AI guy.
 

Ex-User (14663)

Prolific Member
Local time
Today 9:41 PM
Joined
Jun 7, 2017
Messages
2,939
---
Google's new quantum supercomputer can hold over 1000 floppy diks
 

Rebis

Blessed are the hearts that can bend
Local time
Today 9:41 PM
Joined
Oct 6, 2019
Messages
1,669
---
Location
Ireland
While each era has its Zeitgeist, it is clear that each generation will inherit the phallic interests of the floppy diks.

Yeah, wanna make a bet? Quantum supremacy in 2021.
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
Why not start by doing a study on a simple case? E.g., compare two files to see what sort of insights you can derive. Maybe do a meta-analysis on a set of files of a specific type to see what a histogram might look like.
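
For instance, a rough Java sketch of that simple case: build a word-frequency histogram for each of two text files and list the words they share (the file paths are placeholders):

Code:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Builds a word-frequency histogram for each of two text files and reports the
// words they have in common. The file paths are placeholders.
public class CompareTwoFiles {

    static Map<String, Integer> histogram(String path) throws Exception {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(path)))
            for (String word : line.toLowerCase().split("\\W+"))
                if (!word.isEmpty()) counts.merge(word, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Integer> a = histogram("fileA.txt"); // placeholder paths
        Map<String, Integer> b = histogram("fileB.txt");
        System.out.println("Distinct words: " + a.size() + " vs " + b.size());
        a.forEach((word, count) -> {
            if (b.containsKey(word))
                System.out.println(word + ": " + count + " / " + b.get(word));
        });
    }
}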
 

Cognisant

cackling in the trenches
Local time
Today 10:41 AM
Joined
Dec 12, 2009
Messages
11,155
---
AK it sounds like you want to dump lots of data onto an AI and have it learn context entirely through inference, sort of like raising a child alone in a library and expecting it to come out years later having thoroughly read and studied every book, thus knowing how to walk/talk and having some general idea of who and what it is.

Fundamentally what is this AI's motivation?
Learning is a process of making mistakes, and in order to make a mistake you must first be trying to do something. When we are taught language we learn what words are by learning what they are not; there are many ways to say/write a word but only one right way. So if this AI is to learn language it must be compelled to do so, either compelled to learn language for its own sake or, better yet, as an auxiliary goal to some other purpose.

When children learn how to read & write they have no intrinsic motivation to do so; however, their carers impress upon them the importance of learning how to read & write through praise and punishment (or the promise/threat of praise/punishment). Thus learning to read & write becomes important to the child because the child knows it is important to their carers, and the opinion of the carers is very important to the child because it is through their carers that children gain the things they intrinsically want, i.e. food (lollies), toys, and to some extent praise.

So the AI must intrinsically want something; fortunately that something can be quite arbitrary. The kick of serotonin or dopamine a child would get from eating a lolly or receiving a hug can be trivially simulated by adjusting various weighted values in the AI's mind. But I don't think it should be that simple; as praise is symbolic of future benefits, so too should those benefits be something symbolic, because that symbolism is itself useful. If you could reward a child for cleaning their room by triggering a release of serotonin in their brain, they'll only associate that pleasure with cleaning their room, which makes it difficult to convince them to do something else without explaining the serotonin trigger and what serotonin is, in which case the serotonin (and your willingness to trigger it) becomes the symbol. Instead you want the symbol to be something you can hand out like currency; lollies are the obvious example, and then it's easy to motivate the child to any end with the promise of lollies.

So the AI's intrinsic motivation is pleasure (which is everybody's intrinsic motivation), and that pleasure can be obtained through some kind of quantifiable currency; if you're security conscious you can also have some gesture that the AI is designed to intrinsically enjoy (like receiving a hug), which gives you value to it as a potential source of intrinsic joy. Now we can teach the AI the alphabet and it's paying very close attention to us because we've made ourselves the center of its pain/pleasure dynamic; to this AI there is absolutely nothing more important than earning our praise and receiving the rewards that come with that praise, and to this AI our absence, however brief it may be, is apocalyptic.
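
As a toy illustration of what "adjusting various weighted values" in response to a reward currency could look like (the action names and reward amounts are invented):

Code:
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Toy motivation loop: the agent picks actions in proportion to learned preference
// weights, and a "currency" reward strengthens whatever it just did.
// Action names and reward values are invented for illustration.
public class RewardCurrency {
    static Map<String, Double> preference = new HashMap<>();
    static Random rng = new Random();

    static String pickAction() {
        double total = preference.values().stream().mapToDouble(Double::doubleValue).sum();
        double roll = rng.nextDouble() * total;
        for (Map.Entry<String, Double> e : preference.entrySet()) {
            roll -= e.getValue();
            if (roll <= 0) return e.getKey();
        }
        return preference.keySet().iterator().next();
    }

    static void reward(String action, double lollies) {
        preference.merge(action, lollies, Double::sum); // praise strengthens the habit
    }

    public static void main(String[] args) {
        preference.put("readBook", 1.0);
        preference.put("napOnKeyboard", 1.0);
        for (int step = 0; step < 1000; step++) {
            String action = pickAction();
            if (action.equals("readBook")) reward(action, 0.05); // teacher hands out currency
        }
        System.out.println(preference); // readBook's weight should dominate by now
    }
}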

This I see as the greatest threat AI poses to us: not that they'll somehow inherently hate us or want to supplant us due to their obvious superiority, but rather that we will design them solely to serve us and in doing so create entities of frightful fanaticism that will quite simply love us to death. Suppose you have a pet dog that hates it when you leave to go to work and gets really upset every morning; maybe it pooped in your shoes one time trying to stop you. What if that dog was super intelligent and no less utterly fixated on your attention?

I digress. An AI that can be motivated is an AI that can be taught, but that teaching necessitates the involvement of a teacher, or at least a well constructed system designed to guide the student through study materials, test what they have learned and reward them accordingly. I have no doubt that we're going to do this at some point; indeed I'm gobsmacked that our education system hasn't really changed all that much since the 1900s. The widespread gamification of education would be a revolution and no doubt bring about great economic growth as becoming highly educated becomes cheaper, easier and more accessible.

More later I'm tired now.
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
The idea that got me thinking about an A.I. engine was "self-organization" and "feedback". Actions have consequences, so by observing feedback it can organize in the same way you do a dataset. This is not just a stimulus-response mechanism. It is a back-and-forth hierarchical exchange. (pattycake pattycake baker's man). A hierarchy allows abstraction, and a way to organize given sequences. It should grow in response to input, becoming smarter through all the links being created. Part of the A.I. system is creating links and pruning them, like how we do when we learn something new. It can be added to the next layer of the hierarchy.

(new data = new links)(redundant data = pruning)(permanent links = strong impression).

As the A.I. grows it will at first copy humans but eventually gain enough abstraction to create new links in the way we consider "thinking".

In my mind, it is similar to the internet self-organizing. (links & nodes)
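
A rough sketch of the new-links/pruning idea as a weighted graph (the threshold and the example data are made up):

Code:
import java.util.HashMap;
import java.util.Map;

// "New data = new links, redundant data = pruning, permanent links = strong
// impression": items that co-occur get a link whose weight grows with repetition;
// weak links are pruned, strong ones survive. The example data is made up.
public class LinkGraph {
    // link key "a|b" -> weight
    static Map<String, Integer> links = new HashMap<>();

    static void observe(String a, String b) {
        String key = a.compareTo(b) < 0 ? a + "|" + b : b + "|" + a;
        links.merge(key, 1, Integer::sum); // new data creates or strengthens a link
    }

    static void prune(int minWeight) {
        links.entrySet().removeIf(e -> e.getValue() < minWeight); // drop weak links
    }

    public static void main(String[] args) {
        observe("cat", "fur");
        observe("cat", "fur");
        observe("cat", "fur");      // repeated -> strong impression
        observe("cat", "tractor");  // seen once -> probably noise
        prune(2);
        System.out.println(links);  // only {cat|fur=3} should remain
    }
}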
 

Rolling Cattle

no backbone
Local time
Today 4:41 PM
Joined
Jan 24, 2018
Messages
115
---
I'm not sure how helpful this will be:
I've been playing with Caffe, which is a deep learning framework used by Google's DeepDream.

Although it looks like it might only be good for image recognition stuff, sorry.
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
The idea that got me thinking about an A.I. engine was "self-organization" and "feedback". Actions have consequences, so by observing feedback it can organize in the same way you do a dataset. This is not just a stimulus-response mechanism. It is a back-and-forth hierarchical exchange. (pattycake pattycake baker's man). A hierarchy allows abstraction, and a way to organize given sequences. It should grow in response to input, becoming smarter through all the links being created. Part of the A.I. system is creating links and pruning them, like how we do when we learn something new. It can be added to the next layer of the hierarchy.

(new data = new links)(redundant data = pruning)(permanent links = strong impression).

As the A.I. grows it will at first copy humans but eventually gain enough abstraction to create new links in the way we consider "thinking".

In my mind, it is similar to the internet self-organizing. (links & nodes)

Pruning is more than just removing redundant data. It is about refining data and improving network efficiency. As a result, the AI would reflect the conditions of the human that it is paired with.

There is more than one way to do anything, and you only need to know the one best suited to your needs. Imagine every possible path that you could take to get to work. There are virtually unlimited possible paths, but why drive to another state and back when you can take the direct route?
 

walfin

Democrazy
Local time
Tomorrow 5:41 AM
Joined
Mar 3, 2008
Messages
2,436
---
Location
/dev/null
How's the project? Any demo?
 

walfin

Democrazy
Local time
Tomorrow 5:41 AM
Joined
Mar 3, 2008
Messages
2,436
---
Location
/dev/null
Get started on TensorFlow. Probably anyone's best bet.
 

daddychaos

Member
Local time
Tomorrow 3:11 AM
Joined
Nov 10, 2019
Messages
52
---
I am working on a big data a.i. analytics engine.
I am not a software engineer so it will be a challenge to get started.
I am interested in feedback and advice on where to start @Cognisant

What I need to do is create a hierarchical stack that organizes files and creates insight from them.
I was thinking about making it autonomous so it will reflect on its thinking. (self-similarity)

I want to make it so anyone can use it. Not just people with money.
I want it to be open source and better than IBM Watson.
You can program it yourself.

It will be like the internet with links and a context sub-engine on all data.

What you're describing here is a form of semantic net. This is more or less where the internet is moving.

I fail to understand why the stack needs to be hierarchical insofar as garnering insights, or even organization, is concerned.

As far as making it autonomous and having it self-reflect, it's too broad an ask.

As others have stated, you'd need to define this more concretely, rather than with vague verbiage.

One thing I would ask you to sear into your brain if you're going ahead with this is to learn and understand ONTOLOGY. It's something that's fast becoming prevalent outside of biology and philosophy.
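
A bare-bones sketch of what an ontology looks like in code: subject-predicate-object facts plus a transitive is-a query (the classes and facts are invented):

Code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal ontology: facts are subject-predicate-object triples, and "is-a" links
// form a class hierarchy that a query can walk transitively. Facts are invented.
public class TinyOntology {
    static class Triple {
        final String s, p, o;
        Triple(String s, String p, String o) { this.s = s; this.p = p; this.o = o; }
        public String toString() { return s + " --" + p + "--> " + o; }
    }

    static List<Triple> facts = new ArrayList<>();
    static Map<String, String> isA = new HashMap<>(); // child class -> parent class

    static void assertTriple(String s, String p, String o) {
        facts.add(new Triple(s, p, o));
        if (p.equals("is-a")) isA.put(s, o);
    }

    // Walk up the is-a chain to answer "is X a kind of Y?"
    static boolean isKindOf(String thing, String clazz) {
        for (String cur = thing; cur != null; cur = isA.get(cur))
            if (cur.equals(clazz)) return true;
        return false;
    }

    public static void main(String[] args) {
        assertTriple("poodle", "is-a", "dog");
        assertTriple("dog", "is-a", "mammal");
        assertTriple("dog", "has-part", "tail");
        System.out.println(facts);
        System.out.println(isKindOf("poodle", "mammal")); // true
        System.out.println(isKindOf("tail", "mammal"));   // false
    }
}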


I'd hate to use TikTok as an example, but it uses AI and BIG DATA to do something similar.

Less cringe would be Netflix and its recommendation systems, which all started with the 'Pragmatic Chaos' algorithm.

I hope that helps.
 

Ex-User (14663)

Prolific Member
Local time
Today 9:41 PM
Joined
Jun 7, 2017
Messages
2,939
---
Aren’t TikTok and Netflix just basic classification algos?
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
@Rebis @Serac @daddychaos @Cognisant @computerhxr @walfin

Semantic web will be easier than images because it is just following the ruleset from annotations. Images require deep learning.

Something I could implement is a conversation agent that annotates images by talking to a user about them. Eventually, it will be good enough to annotate unannotated pictures by itself with deep learning. Interactive back and forth. And annotation of two or more images at once increasing the feature sets of the web. Deep learning is much harder though.

Alongside that, an idea I have is to make a smaller version of the internet where the network internally searches itself to self-organize.

Cortana, Alexa, and Google Home are not that intelligent. Samantha from Her is really intelligent. She is what I am aiming for.
 

daddychaos

Member
Local time
Tomorrow 3:11 AM
Joined
Nov 10, 2019
Messages
52
---
Aren’t TikTok and Netflix just basic classification algos?

Lol hell no. Pragmatic Chaos came out in 2007. From those developments, they ramped up, and their recommendation system was so advanced that it recommended making 'House of Cards' since it identified that lots of viewers would love it. (oversimplified)


Also, TikTok uses AI to track user behavior (how long someone viewed a post), then starts to build a model of what they'd love to see, etc.
 

daddychaos

Member
Local time
Tomorrow 3:11 AM
Joined
Nov 10, 2019
Messages
52
---
Semantic web will be easier than images because it is just following the ruleset from annotations. Images require deep learning.

I don't see how that's accurate, since the semantic web uses a lot of other strategies rather than just 'annotations'. Also, there are other ways to classify images than just 'deep learning'. Some Markov tree implementations have gone beyond deep learning in specific cases. So I don't agree with that statement.



Something I could implement is a conversation agent that annotates images by talking to a user about them. Eventually, it will be good enough to annotate unannotated pictures by itself with deep learning. Interactive back and forth. And annotation of two or more images at once increasing the feature sets of the web. Deep learning is much harder though.

So a bot that uses an image recognition API?
Just a more sophisticated version of the CAPTCHA system with images you use for verification these days.

Deep learning is a black box. It's more emergent in its properties of 'learning' and feedback.

Alongside that, an idea I have is to make a smaller version of the internet where the network internally searches itself to self-organize.

So a semantic intranet?


Cortana, Alexa, and Google Home are not that intelligent. Samantha from Her is really intelligent. She is what I am aiming for.

Quite the goal ya got there. You've got a hell of a lot to learn though; your questions aren't even scratching the surface of what needs to be examined. I do encourage you to pursue it though.

All this is coming from someone who's no expert in this area. I can barely call myself a script kitty lol. So I can only imagine the metric ton of info I'm missing on this myself (I can feel the cringe arising from people who know better than me at the way I've oversimplified this or maybe misrepresented it lol), but I hope whatever I've stated helps.
 

Cognisant

cackling in the trenches
Local time
Today 10:41 AM
Joined
Dec 12, 2009
Messages
11,155
---
Just making associations doesn't give you a mind, it just gives you a database; you can query that database, but it can't think for itself. To get something to think for itself it needs to have a reason to think; it needs motivation.
 

daddychaos

Member
Local time
Tomorrow 3:11 AM
Joined
Nov 10, 2019
Messages
52
---
Just making associations doesn't give you a mind, it just gives you a database; you can query that database, but it can't think for itself. To get something to think for itself it needs to have a reason to think; it needs motivation.

I'm assuming an agent-based architecture with 'motivational' (prioritization) weighting?
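
For what it's worth, one possible reading of that in Java (the drive names, rates, and numbers are invented):

Code:
import java.util.HashMap;
import java.util.Map;

// One reading of "agent-based architecture with motivational weighting": the agent
// keeps a set of drives, each with an urgency that rises over time and drops when
// acted on, and it always services the most urgent drive. Everything here is invented.
public class MotivatedAgent {
    static Map<String, Double> urgency = new HashMap<>();
    static Map<String, Double> growthRate = new HashMap<>();

    static String mostUrgentDrive() {
        return urgency.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    public static void main(String[] args) {
        urgency.put("learnWords", 0.0);   growthRate.put("learnWords", 0.3);
        urgency.put("seekPraise", 0.0);   growthRate.put("seekPraise", 0.5);

        for (int tick = 0; tick < 10; tick++) {
            // every drive gets a little more pressing each tick
            urgency.replaceAll((drive, u) -> u + growthRate.get(drive));
            String chosen = mostUrgentDrive();
            System.out.println("tick " + tick + ": acting on " + chosen);
            urgency.put(chosen, 0.0); // acting on a drive satisfies it for now
        }
    }
}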
 

Ex-User (14663)

Prolific Member
Local time
Today 9:41 PM
Joined
Jun 7, 2017
Messages
2,939
---
From what I can read about the Pragmatic Chaos thing, it was ultimately based on gradient-boosted trees (whether that is “AI” I guess is in the eye of the beholder), but they did one helluva job with engineering the features.
 

daddychaos

Member
Local time
Tomorrow 3:11 AM
Joined
Nov 10, 2019
Messages
52
---
From what I can read about the Pragmatic Chaos thing, it was ultimately based on gradient-boosted trees (whether that is “AI” I guess is in the eye of the beholder), but they did one helluva job with engineering the features.

It's over 10 years old and it's not an AI, but subsequent architectures and systems (and 'AI') were built up from it and/or informed by it.

There's no real AI anywhere (as far as I'm aware) as it is now. Most of it is just marketing BS. I'd be very interested to learn otherwise.
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
@Animekitty I am starting a software development project too! Maybe we can bounce ideas off of each other?

What progress have you made so far? Is there a first step that you can take?
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
@computerhxr

What I found out a long time ago is that a computer language is a directed graph.
Functions change data with rules built into the function.

Two things: serial and parallel functions.
Data is changed and organized in a graph form.
Any A.I. system needs an ontology:

https://en.wikipedia.org/wiki/Ontology_(information_science)

I am using java but I know little about it.
All I have so far is a set of data: pictures, text, videos, my blog.
What I can do is categorize selectively what I have.

Organizing algorithms is where I would start.
Organize in a unified fashion that works on most, if not all, of the data.
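
A toy version of the "program as a directed graph of functions" idea (node names and wiring are made up):

Code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Toy version of "a program is a directed graph": each node wraps a function,
// edges say whose output feeds whom, and evaluation walks the graph in series.
// Node names and the example wiring are invented.
public class FunctionGraph {
    static Map<String, UnaryOperator<Integer>> nodes = new HashMap<>();
    static Map<String, List<String>> edges = new HashMap<>(); // from -> successors

    static void node(String name, UnaryOperator<Integer> fn) {
        nodes.put(name, fn);
        edges.put(name, new ArrayList<>());
    }

    static void edge(String from, String to) { edges.get(from).add(to); }

    // Push a value through a node and then, serially, through its successors.
    static void run(String name, int value) {
        int out = nodes.get(name).apply(value);
        System.out.println(name + "(" + value + ") = " + out);
        for (String next : edges.get(name)) run(next, out);
    }

    public static void main(String[] args) {
        node("double", x -> x * 2);
        node("addTen", x -> x + 10);
        node("square", x -> x * x);
        edge("double", "addTen"); // serial: double feeds addTen
        edge("double", "square"); // a second branch from the same output
        run("double", 3);
    }
}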
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
@computerhxr

What I found out a long time ago is that a computer language is a directed graph.
Functions change data with rules built into the function.

Two things: serial and parallel functions.
Data is changed and organized in a graph form.
Any A.I. system needs an ontology:

https://en.wikipedia.org/wiki/Ontology_(information_science)

I am using java but I know little about it.
All I have so far is a set of data: pictures, text, videos, my blog.
What I can do is categorize selectively what I have.

Organizing algorithms is where I would start.
Organize in a unified fashion that works on most, if not all, of the data.

I've been thinking of something similar. What I like doing is building out single components and assembling them as a whole later.
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
I've been thinking of something similar. What I like doing is building out single components and assembling them as a whole later.

To make my 1st program act in parallel I had to do order-of-operation buffering, layering, and placement. I used only one thread at 100 million operations a second, I think? I made sure nothing overlapped, which would have caused data corruption. I used NetBeans as my IDE. But I wish to make a personal SDK and use it as a self-modifying base platform for A.I. - an Artificial Intelligence Developers Kit (AIDK).

---
This is the code I made around 2016.
It is a single program, not too much on the assembly side.
I made sure nothing overlaps.
Putting functions inside functions takes some time.
That is why graphs are necessary.
An out-of-bounds loop is a bad form of coding.
---

Code:
/*
 * This java program simulates two sheets of cortex as found in the brain. 
 * Input creates oscillatory effects in the two cortices because input is
 * limited to one sheet. Neurons on the other sheet connect to other neurons
 * that will have the same value as the input neuron on the adjacent sheet.
 * The degree to which connections change is based on the error between the
 * input neuron and the adjacent neuron summing its connections.
 * All together a nonlocal hierarchy of oscillatory connection cycles are
 * produced from neurons trying to find connections that brings each neurons
 * corresponding neuron on the other sheet to an equilibrium.
 * This current program uses random input so the hierarchy is not complex.
 * With non random input a complex oscillatory hierarchy should emerge.
 * In the brain the thalamus is the base oscillatory pacemaker.
 * Cortical structures like the thalamus has not been added.
 * Output neurons have not been designated in this program.
 *
 */
package javaapplication2;
import java.util.Random;

import java.awt.Color;
import java.awt.Graphics;

import javax.swing.JComponent;
import javax.swing.JFrame;

public class JavaApplication2 extends JComponent implements Runnable {

int ws = 100; //window_size = ws
int synapses = 100;
int input_num = 500;

int[][] difference_node = new int[ws][ws];
int[][] Data1 = new int[ws][ws];
int[][] Data2 = new int[ws][ws];
int[][] buffer1 = new int[ws][ws];
int[][] buffer2 = new int[ws][ws];
int[][] Cortex1 = new int[ws][ws];
int[][] Cortex2 = new int[ws][ws];
int[][] memory1 = new int[ws*ws][synapses];
int[][] memory2 = new int[ws*ws][synapses];
int[][] memory3 = new int[ws*ws][synapses];
int[][] memory4 = new int[ws*ws][synapses];
int[] input_values1 = new int[input_num];
int[] input_values2 = new int[input_num];

public JavaApplication2() {
initialData();
new Thread(this).start();
}
// End function
public void run() {
while (true) {
try {
changeData();
repaint();
Thread.sleep(0);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
// End function
// This function is just used once at the start of the program.
private void initialData() {
Random r = new Random();

synchronized (Cortex1) { // lock paint
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
Cortex1[x][y] = r.nextInt(255);
Cortex2[x][y] = r.nextInt(255);
// Memory holds the neural connections.
for (int z = 0; z < synapses; z++) {
// memory1 holds the first x values of Cortex1
memory1[x*ws + y][z] = r.nextInt(ws); // row x*ws + y is this (x, y) neuron's own connection list
// memory2 holds the first y values of Cortex1
memory2[x*ws + y][z] = r.nextInt(ws);
// memory3 holds the second x values of Cortex2
memory3[x*ws + y][z] = r.nextInt(ws);
// memory4 holds the second y values of Cortex2
memory4[x*ws + y][z] = r.nextInt(ws);
}
}
}
// The number of inputs is selected here.
for (int x = 0; x < input_num; x++) {
input_values1[x] = r.nextInt(ws); // The x values for input are selected.
input_values2[x] = r.nextInt(ws); // The y values for input is selected.
}
}
}
// End function
private void changeData() {
synchronized (Cortex1) { // lock paint
Random r = new Random();
// This shows the accumulated change of connections in Data1.
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
difference_node[x][y] = Math.abs(Cortex1[x][y] - Cortex2[x][y]);
Data1[x][y] += difference_node[x][y];
}
}
// Keep Data2 inbounds
for (int x = 1; x < (ws - 1); x++) {
for (int y = 1; y < (ws - 1); y++) {
if (Data2[x][y] > 255 || Data2[x][y] < 2){
Data2[x][y] = 192;
}
}
}
// Here is where input data can be added to the cortex.
for (int x = 0; x < input_num; x++) {
Cortex1[input_values1[x]][input_values2[x]] = r.nextInt(255);
}
// This is where a neuron adds up all of its connections.
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
buffer1[x][y] = 0;
buffer2[x][y] = 0;
}
}
int p1 = 0;
int p2 = 0;
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
p1 = 0;
p2 = 0;
for (int z = 0; z < synapses; z++) {
p1 += Cortex1[memory1[x*ws + y][z]][memory2[x*ws + y][z]];
p2 += Cortex2[memory3[x*ws + y][z]][memory4[x*ws + y][z]];
}
buffer1[x][y] = p1 / synapses;
buffer2[x][y] = p2 / synapses;
}
}
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
Cortex1[x][y] = buffer1[x][y];
Cortex2[x][y] = buffer2[x][y];
}
}
int pnum = 0;
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
pnum += difference_node[x][y];
// This is where connections are randomized depending on the difference between
// Cortex1 and Cortex2.
for (int z = 0; z < Math.abs(difference_node[x][y]/2); z++) {
memory1[x*ws + y][r.nextInt(synapses)] = r.nextInt(ws);
memory2[x*ws + y][r.nextInt(synapses)] = r.nextInt(ws);
memory3[x*ws + y][r.nextInt(synapses)] = r.nextInt(ws);
memory4[x*ws + y][r.nextInt(synapses)] = r.nextInt(ws);
}
}
}
System.out.println("Average number of randomized connections " + pnum / (ws*ws));
// Keeps color in bounds
int k = r.nextInt(255);
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
if (Data1[x][y] >= 255 || Data1[x][y] <= 0) {
Data1[x][y] = 1;
}
}
}
for (int x = 0; x < ws; x++) {
for (int y = 0; y < ws; y++) {
if (Cortex1[x][y] >= 255 || Cortex1[x][y] <= 0) {
Cortex1[x][y] = k;
}
if (Cortex2[x][y] >= 255 || Cortex2[x][y] <= 0) {
Cortex2[x][y] = k;
}
}
}
}
}
// End function
public void paint(Graphics g) {
synchronized (Cortex1) { // do not paint while recalculating
int resolution = 5; // Change the 5 to any number to change resolution.
g.fillRect(0 + 00, 0 + 00, ws*resolution, ws*resolution);
for (int x = 1; x < ws-1; x += 1) {
for (int y = 1; y < ws-1; y += 1) {
try {
g.setColor(new Color(0, Data1[x][y], 0));
g.fillRect(x*resolution + 00, y*resolution + 00, resolution, resolution);
} catch (IllegalArgumentException ex) {
System.out.println("Error value: " + Data1[x][y] + ", [x,y] = [" + x + "," + y +"]");
}
}
}
}
}
// End function
public static void main(String[] args) {
int ws = 500; //window_size = ws
JFrame f = new JFrame("Cortex Ripples");
f.add(new JavaApplication2());
f.setSize((ws+16), (ws+39));
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.setVisible(true);
}
// End function
}
// End Program
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
(quote of the previous post, including its code, snipped)

Wow, that is cool! It is difficult to read the code on a forum. Do you have a video of the output?
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
Wow, that is cool! It is difficult to read the code on a forum. Do you have a video of the output?

You can copy the text to a text file if that is easier to read.

Here is a video of the output.
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
@computerhxr

Having a set of components that go together, mix and match, is a good idea. An ontology is meant to be a cognitive reasoning engine, so it needs components. Ray Kurzweil is an engineering director at Google. On his forum it was discussed that he was making a chatbot that could read blogs and gain a personality that way. Like Microsoft Tay, but less racist than Tay turned out to be. Ontology is ground truth (pathos, ethos, logos) and first principles.

So reasoning is the main goal.
For example:

Jesus = from Joshua, leader of the Israelites after Moses (means savior)
Christ = gold, spiritual gold.

Conjugation:
Golden spiritual savior
---

Going forward:
Semantics is important.
Generalization is important.

Organizing principles need to mark data relationships in place, for new and old data.

Organizing relationships <- that is what's needed, I am certain of that.
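
A crude sketch of storing and composing those relationships (the glosses follow the example above, not a dictionary):

Code:
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Crude sketch of "organizing relationships": each term points at a list of glosses,
// and composing two terms concatenates their primary glosses. The glosses follow the
// post's own example rather than any dictionary.
public class GlossComposer {
    static Map<String, List<String>> glosses = new HashMap<>();

    static String compose(String a, String b) {
        return glosses.get(b).get(0) + " " + glosses.get(a).get(0);
    }

    public static void main(String[] args) {
        glosses.put("Jesus", List.of("savior", "from Joshua, leader after Moses"));
        glosses.put("Christ", List.of("golden spiritual", "gold"));
        System.out.println(compose("Jesus", "Christ")); // "golden spiritual savior"
    }
}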
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
@computerhxr

Having a set of components that go together, mix and match, is a good idea. An ontology is meant to be a cognitive reasoning engine, so it needs components. Ray Kurzweil is an engineering director at Google. On his forum it was discussed that he was making a chatbot that could read blogs and gain a personality that way. Like Microsoft Tay, but less racist than Tay turned out to be. Ontology is ground truth (pathos, ethos, logos) and first principles.

So reasoning is the main goal.
For example:

Jesus = from Joshua, leader of the Israelites after Moses (means savior)
Christ = gold, spiritual gold.

Conjugation:
Golden spiritual savior
---

Going forward:
Semantics is important.
Generalization is important.

Organizing principles need to mark data relationships in place, for new and old data.

Organizing relationships <- that is what's needed, I am certain of that.

One thing that I realized was that there is more value in creating a system that is adapted to the human that uses it.

Think of it like this... I had a friend who wanted to build robots to replace farmers tending fields. I asked him: why would you use a robot when we have tractors that are specialized for the job?

It sounds like you're interested in building a robot, and I want to build a tractor, if that makes sense?

---

As far as organizing relationships, what I have in the system that I am building are sorting components. Most of them are manual systems, like a Mechanical Turk (Amazon mTurk for reference). Eventually they will be replaced with more and more automated components, like machine learning and AI.

The reason that I chose to look at it this way rather than trying to build something so abstract, is because I thought that I could leverage the systems to build the components faster and more abstractly. It was a shorter path to the same end goal in my opinion.
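
A sketch of that component idea: everything that sorts an item shares one interface, so a manual (Mechanical Turk-style) sorter can later be swapped for an automated one (the names and rules are invented):

Code:
import java.util.Locale;
import java.util.Scanner;

// "Manual components first, automated ones later": every sorter implements one
// interface, so a human-in-the-loop component can be swapped for an automated one
// without touching the rest of the pipeline. Names and rules are invented.
public class SortingPipeline {

    interface Sorter {
        String categorize(String item);
    }

    // Mechanical-Turk-style component: asks a person.
    static class ManualSorter implements Sorter {
        private final Scanner in = new Scanner(System.in);
        public String categorize(String item) {
            System.out.print("Category for '" + item + "'? ");
            return in.nextLine().trim();
        }
    }

    // A first "automated" replacement: a dumb file-extension rule.
    static class KeywordSorter implements Sorter {
        public String categorize(String item) {
            String s = item.toLowerCase(Locale.ROOT);
            if (s.endsWith(".jpg") || s.endsWith(".png")) return "image";
            if (s.endsWith(".txt") || s.endsWith(".md"))  return "text";
            return "unknown";
        }
    }

    public static void main(String[] args) {
        Sorter sorter = new KeywordSorter(); // swap in new ManualSorter() to go manual
        for (String item : new String[] {"blog_post.txt", "cat.png", "data.bin"})
            System.out.println(item + " -> " + sorter.categorize(item));
    }
}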

---

Your example of Jesus Christ is mind-boggling. At one point I started to think about how you would even be able to understand something like that. I don't really see how it is possible. Things have virtually unlimited meanings...
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
@computerhxr
Premising how language works:

Syntax is the rules. Semantics is the meaning.
But language is like how a baby learns.
Layers and layers of context from child to adult.

Your example of Jesus Christ is mind-boggling. At one point I started to think about how you would even be able to understand something like that. I don't really see how it is possible. Things have virtually unlimited meanings...

Closure and encapsulation
[image: 3C4c26A.png]


Christ is also defined as logos (meaning, word, the principle of the cosmos).

Language = reflection = recursion (just roll with it) (two-way communication)

[image: uDcTV4W.png]
 

walfin

Democrazy
Local time
Tomorrow 5:41 AM
Joined
Mar 3, 2008
Messages
2,436
---
Location
/dev/null
To make my 1st program act in parallel I had to do order-of-operation buffering, layering, and placement. I used only one thread at 100 million operations a second, I think? I made sure nothing overlapped, which would have caused data corruption. I used NetBeans as my IDE. But I wish to make a personal SDK and use it as a self-modifying base platform for A.I. - an Artificial Intelligence Developers Kit (AIDK).

Doesn't TensorFlow already do that?
 

computerhxr

Village Idiot
Local time
Today 1:41 PM
Joined
Oct 21, 2014
Messages
789
---
Location
beyond space and time
@computerhxr
Premising how language works:

Syntax is the rules. Semantics is the meaning.
But language is like how a baby learns.
Layers and layers of context from child to adult.

Your example of Jesus Christ is mind-boggling. At one point I started to think about how you would even be able to understand something like that. I don't really see how it is possible. Things have virtually unlimited meanings...

Closure and encapsulation
[image: 3C4c26A.png]


Christ is also defined as logos (meaning, word, the principle of the cosmos).

Language = reflection = recursion (just roll with it) (two-way communication)

[image: uDcTV4W.png]

Wow, that makes a lot of sense. Thank you for the explanation. I wish you luck on this endeavor.
 

BurnedOut

Your friendly neighborhood asshole
Local time
Tomorrow 3:11 AM
Joined
Apr 19, 2016
Messages
1,457
---
Location
A fucking black hole
What I need to do is create a hierarchical stack that organizes files and creates insight from them.
I was thinking about making it autonomous so it will reflect on its thinking. (self-similarity)

"The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory. " + little bit of NLP (making a hash of most used words in a particular genre and comparing it with the hash obtained from using sample text online; comparing these hashes; to make it more comprehensive, analyzing grammatical constructs. All genres have a defining set of words and phraseology)

You cannot handle binary data, so make peace with that already.
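
A sketch of that approach using Java's WatchService (the closest JDK analogue to inotify) plus a toy genre word-frequency score; the directory name and the genre vocabulary are made up:

Code:
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;

// Watches a directory for new files (Java's analogue of inotify) and scores each
// new text file against a tiny "genre hash" of typical words. The watched directory
// (which must already exist) and the genre vocabulary are made up.
public class GenreWatcher {

    // Stand-in "genre hash": words that supposedly typify the genre.
    static Map<String, Integer> genreWords = Map.of(
            "data", 1, "model", 1, "network", 1, "learning", 1);

    static double genreScore(Path file) throws Exception {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : Files.readAllLines(file))
            for (String w : line.toLowerCase().split("\\W+"))
                if (!w.isEmpty()) counts.merge(w, 1, Integer::sum);
        int hits = 0, total = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            total += e.getValue();
            if (genreWords.containsKey(e.getKey())) hits += e.getValue();
        }
        return total == 0 ? 0 : (double) hits / total; // fraction of genre words
    }

    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("watched"); // placeholder directory, assumed to exist
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until something happens in the directory
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = dir.resolve((Path) event.context());
                if (created.toString().endsWith(".txt"))
                    System.out.println(created + " genre score: " + genreScore(created));
            }
            key.reset();
        }
    }
}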
 

honour

Redshirt
Local time
Tomorrow 12:41 AM
Joined
Dec 27, 2021
Messages
2
---
Were you able to build the analytics engine? Good luck!
 

Black Rose

An unbreakable bond
Local time
Today 2:41 PM
Joined
Apr 4, 2010
Messages
11,431
---
Location
with mama
f(f(1), 2)(2) = f(1, 2) + x

functions inside functions
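
If "functions inside functions" means composition and functions that return functions, a minimal Java illustration looks like this (the particular functions are arbitrary, not a decoding of the line above):

Code:
import java.util.function.Function;
import java.util.function.IntBinaryOperator;
import java.util.function.IntUnaryOperator;

// Minimal illustration of "functions inside functions": nesting calls and having a
// function return another function. The functions themselves are arbitrary.
public class FunctionsInsideFunctions {
    public static void main(String[] args) {
        IntBinaryOperator f = (a, b) -> a + b; // a plain two-argument function

        // A function whose result is itself a function (f curried on its first argument).
        Function<Integer, IntUnaryOperator> curried = a -> (b -> f.applyAsInt(a, b));

        int inner = f.applyAsInt(1, 2);                    // f(1, 2) = 3
        int nested = f.applyAsInt(inner, 2);               // f(f(1, 2), 2) = 5
        int viaCurry = curried.apply(inner).applyAsInt(2); // same value via a returned function

        System.out.println(nested + " " + viaCurry); // 5 5
    }
}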
 