ARTIFICIAL INTELLIGENCE DISSERTATION

Historical Foundations, Methodologies, Applications, and Future Directions

Introduction

Artificial Intelligence (AI) is widely regarded as one of the most transformative technological paradigms of the twenty-first century. While the term evokes the notion of machines performing tasks that would require human intelligence, it encompasses a complex landscape of technical, philosophical, and societal considerations. This dissertation provides a comprehensive account of AI, tracing its historical roots, technical foundations, contemporary applications, and potential future trajectories.

AI research has experienced nonlinear progress. Early optimism based on symbolic reasoning and logic gave way to periods of stagnation, the so-called “AI winters.” The late twentieth and early twenty-first centuries witnessed a resurgence driven by computational advances, algorithmic innovations, and unprecedented data availability. Today, AI permeates fields such as natural language processing, robotics, healthcare, and finance, raising both opportunities and ethical, societal, and regulatory questions.

Historical Foundations of Artificial Intelligence

The history of artificial intelligence is a complex interplay of philosophical inquiry, mathematical formalism, and experimental engineering. To understand contemporary artificial intelligence, one must first appreciate that the aspiration to create machines capable of thought is centuries old, predating the development of modern computing. At its core, artificial intelligence emerges from a combination of two intertwined questions: what is intelligence? and can it be instantiated in non-biological substrates? These questions have long occupied philosophers, logicians, and mathematicians alike.

The intellectual roots of artificial intelligence can be traced to early reflections on the nature of human cognition. Aristotle (384–322 BCE), for example, developed formal logic systems that attempted to codify reasoning through syllogistic structures. While Aristotle’s framework was primarily aimed at understanding human thought, it laid the foundation for later symbolic approaches to artificial intelligence, which attempt to encode knowledge in logical structures that a machine can manipulate algorithmically.

During the Renaissance and Enlightenment, thinkers such as René Descartes and Gottfried Wilhelm Leibniz entertained mechanistic models of human cognition. Descartes, in Treatise on Man (1662), proposed that animals operate as automata governed by mechanical principles, thereby implicitly raising the question of whether humans themselves could, in principle, be simulated mechanically. Leibniz envisioned a characteristica universalis, a universal symbolic language through which reasoning could be formalised, foreshadowing symbolic logic that would become central to early artificial intelligence.

By the nineteenth century, the formalisation of mathematics and logic accelerated. George Boole’s Laws of Thought (1854) provided the first algebraic system for logical reasoning, offering a template for the computational manipulation of propositions. Concurrently, Charles Babbage and Ada Lovelace explored mechanical computation through the Analytical Engine. Lovelace, in particular, presciently suggested that machines could “weave algebraic patterns” beyond mere numerical calculation, hinting at the creative potential of computation—a concept that would resonate deeply in the artificial intelligence era.

Modern artificial intelligence is inextricably linked to the work of Alan Turing. In 1936, Turing introduced the concept of a universal computing machine, now known as the Turing Machine, which formalised the idea that computation could be abstractly defined independent of physical implementation. This insight established the theoretical feasibility of machines performing any computable function, providing a rigorous foundation for the later development of artificial intelligence.

Turing’s seminal 1950 paper, Computing Machinery and Intelligence, explicitly addressed the question, “Can machines think?” Rather than attempting to define intelligence in philosophical terms, Turing proposed a pragmatic criterion, the Imitation Game—now widely known as the Turing Test. A machine passes this test if it can converse with a human interlocutor in such a manner that the human cannot reliably distinguish it from another human. This operational approach set a precedent for artificial intelligence research: intelligence could be studied empirically through observable behaviour rather than through metaphysical definitions.

Turing’s vision was bold, yet rooted in rigorous technical insight. He anticipated several issues that would become central to artificial intelligence: the necessity of learning from experience, the potential for machines to simulate neural structures, and the conceptual distinction between intelligence and consciousness. Importantly, his work also highlighted the limits of computation, foreshadowing debates about decidability, complexity, and the feasibility of strong artificial intelligence.

The formal birth of artificial intelligence as a research discipline occurred at the Dartmouth Summer Research Project on Artificial Intelligence in 1956, organised by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The Dartmouth proposal articulated a vision in which “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This optimism catalysed a wave of research focused on symbolic artificial intelligence, also referred to as Good Old-Fashioned AI (GOFAI).

Symbolic artificial intelligence emphasised the manipulation of explicit knowledge representations using logical rules. Early programs such as Newell and Simon’s Logic Theorist (1956) and General Problem Solver (1957) demonstrated that computers could, in limited domains, emulate human reasoning by applying formal rules to symbolic representations of knowledge. These systems exemplified the deductive, top-down approach in which intelligence was equated with the systematic application of logic.

During the 1960s and 1970s, AI research expanded into domains such as game playing, theorem proving, and natural language understanding. Programs like ELIZA (Weizenbaum, 1966) illustrated the potential for machines to simulate aspects of human conversation through simple pattern matching. Expert systems, which codified domain-specific knowledge into rules, became a central focus of industrial AI applications, particularly in diagnostics and engineering.

However, this period of optimism was tempered by practical limitations. Symbolic systems struggled with combinatorial explosion in real-world domains, requiring exhaustive searches that were computationally infeasible. Furthermore, the brittleness of rule-based systems highlighted a key limitation: intelligence cannot be fully captured by static rules alone. This recognition set the stage for new approaches grounded in learning and statistical inference.

The initial exuberance of artificial intelligence research eventually encountered severe constraints. Two periods, commonly referred to as artificial intelligence winters, marked phases of reduced funding and institutional scepticism. The first, following the Lighthill Report in the United Kingdom (1973), criticised artificial intelligence research for failing to deliver on its ambitious promises. The United States experienced a similar retrenchment in the late 1970s and 1980s, exacerbated by the limitations of expert systems and their high maintenance costs.

Despite these setbacks, the theoretical foundations of artificial intelligence were not abandoned. The 1980s saw the emergence of connectionist approaches inspired by neuroscience. Neural networks, first proposed in the 1940s by McCulloch and Pitts, experienced a renaissance with the development of the backpropagation algorithm (Rumelhart, Hinton, and Williams, 1986), enabling efficient training of multilayer networks.

The 1990s and early 2000s witnessed a further resurgence driven by three converging factors:

  1. Algorithmic innovation, including probabilistic models and machine learning frameworks;
  2. Exponential increases in computational power consistent with Moore’s Law;
  3. Explosion of digital data enabling large-scale statistical learning.

This period set the stage for the contemporary era of artificial intelligence, characterised by deep learning, large-scale data analytics, and increasingly autonomous systems. Tasks that once seemed intractable—image recognition, natural language translation, and superhuman game playing—became achievable. The historical trajectory from philosophical speculation to symbolic logic, through winters of scepticism, and finally to the modern era illustrates a recurring pattern: progress in artificial intelligence depends not only on theoretical insight, but also on technological, economic, and sociocultural conditions.

Core Concepts and Methodologies

Artificial Intelligence, as a field, encompasses a wide variety of approaches, each reflecting different assumptions about what intelligence is and how it might be realised in machines. These approaches have evolved historically from purely symbolic logic to statistical learning and neural-inspired systems. In this section, we explore the core methodologies of artificial intelligence, providing both conceptual explanations and technical overviews.

Symbolic artificial intelligence, or Good Old-Fashioned AI (GOFAI), represents intelligence as the manipulation of symbols according to formal rules. The central assumption is that human reasoning can be modelled as deterministic operations on abstract symbols such as facts, concepts, or propositions.

A classic example is a rule-based expert system, where knowledge is encoded as “if-then” rules. For instance, in medical diagnostics, a rule might state:

    IF patient has symptom X AND symptom Y, THEN suggest diagnosis Z.

The system then applies logical inference to these rules to derive conclusions. Early systems such as the Logic Theorist (Newell and Simon, 1956) and the General Problem Solver (Newell and Simon, 1957) demonstrated that symbolic manipulation could emulate certain aspects of human problem-solving.
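
To make the inference step concrete, the following is a minimal Python sketch of forward chaining over if-then rules of the kind shown above. It is an illustration only: the symptom and diagnosis names are placeholders rather than fragments of any real diagnostic system.

    # Minimal forward-chaining sketch: fire any rule whose conditions are all
    # satisfied, adding its conclusion to the known facts until nothing changes.
    rules = [
        ({"symptom X", "symptom Y"}, "diagnosis Z"),
        ({"diagnosis Z", "symptom W"}, "diagnosis Q"),
    ]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain({"symptom X", "symptom Y", "symptom W"}, rules))
    # -> includes 'diagnosis Z' and, through chaining, 'diagnosis Q'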

Despite its initial success, symbolic artificial intelligence suffers from combinatorial explosion: the number of possible inferences grows exponentially with the number of rules. Moreover, encoding tacit knowledge (the kind of intuitive, experiential knowledge humans acquire naturally) proved extremely difficult. Consequently, while symbolic artificial intelligence remains valuable in constrained domains, it is ill-suited for handling the uncertainties and complexities of real-world data.

In response to the limitations of symbolic artificial intelligence, researchers developed statistical approaches, collectively known as machine learning (ML). Rather than relying solely on manually coded rules, ML systems learn patterns from data. The basic idea is that intelligence can be approximated by observing input-output relationships and optimising predictions.

Machine learning can be categorised as follows:

  1. Supervised learning: The system learns a mapping from inputs to labelled outputs. For example, a neural network might learn to classify images as cats or dogs based on a labelled dataset (a minimal sketch follows this list).
  2. Unsupervised learning: The system identifies hidden structures in data without labelled examples, such as clustering customers into behavioural segments.
  3. Reinforcement learning: Agents learn to make sequences of decisions by receiving feedback in the form of rewards or penalties, akin to trial-and-error learning.
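
To ground the supervised case, the sketch below uses scikit-learn, a third-party library assumed here purely for illustration; the feature vectors and labels are invented toy data.

    # A minimal supervised-learning sketch: fit a classifier to labelled
    # examples, then predict the labels of unseen inputs.
    from sklearn.linear_model import LogisticRegression

    X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]  # toy features
    y_train = [0, 0, 1, 1]                                      # toy labels

    model = LogisticRegression()
    model.fit(X_train, y_train)                          # learn the input-output mapping
    print(model.predict([[0.15, 0.15], [0.85, 0.85]]))   # expected: [0 1]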

Machine learning relies heavily on probabilistic modelling, reflecting the recognition that real-world data is noisy and uncertain. Bayesian methods, Markov models, and more recently, deep neural networks, provide the mathematical machinery to learn from and reason under uncertainty.

Neural networks are computational models inspired by the structure and function of biological neurons. A simple artificial neuron computes a weighted sum of its inputs, passes it through a non-linear activation function, and propagates the result to subsequent layers. Networks of such neurons can approximate complex functions, a principle formalised in the Universal Approximation Theorem.
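
A single such neuron can be written in a few lines. The sketch below uses NumPy with arbitrary illustrative weights: it computes a weighted sum of the inputs, adds a bias, and applies a sigmoid activation.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def neuron(x, w, b):
        # activation(w . x + b) for a single artificial neuron
        return sigmoid(np.dot(w, x) + b)

    x = np.array([0.5, -1.2, 3.0])   # inputs
    w = np.array([0.4, 0.1, -0.6])   # weights (illustrative; normally learned)
    b = 0.2                          # bias term
    print(neuron(x, w, b))           # a value between 0 and 1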

The resurgence of neural networks in the twenty-first century, commonly called deep learning, has been driven by three key factors:

  1. Availability of large datasets: Modern artificial intelligence models thrive on massive amounts of labelled and unlabelled data.
  2. Computational power: Graphics Processing Units (GPUs) enable efficient training of multi-layer networks.
  3. Algorithmic innovation: Techniques such as convolutional neural networks (CNNs) for images and transformers for sequential data have dramatically improved performance.

Deep learning models, particularly transformer architectures, have revolutionised natural language processing (NLP). Large language models, such as GPT, BERT, and their successors, leverage attention mechanisms to capture contextual relationships across vast corpora of text. This allows artificial intelligence systems to generate coherent, contextually relevant outputs in tasks ranging from translation to summarisation.
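
The core of this attention mechanism can be made concrete with a minimal sketch of scaled dot-product attention (after Vaswani et al., 2017). The NumPy arrays below are random and purely illustrative; real transformers add learned projections, multiple heads, and masking.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to each key
        weights = softmax(scores)         # attention distribution over positions
        return weights @ V                # weighted sum of value vectors

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 positions, dimension 8
    print(attention(Q, K, V).shape)       # (4, 8)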

Reinforcement learning (RL) is inspired by behavioural psychology. An agent interacts with an environment, performing actions that maximise cumulative reward over time. RL has been instrumental in achieving superhuman performance in complex tasks such as Go (AlphaGo) and real-time strategy games.

Formally, RL problems are framed as Markov Decision Processes (MDPs), defined by states, actions, transition probabilities, and reward functions. Modern advances combine deep learning with RL, an approach known as Deep Reinforcement Learning (DRL), allowing agents to handle high-dimensional sensory input, such as raw pixel data from games.
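
The sketch below illustrates these ideas with tabular Q-learning on an invented toy MDP: a short corridor in which the agent moves left or right towards a goal state. It is a pedagogical simplification rather than the deep variants discussed above.

    import random

    N_STATES, ACTIONS = 5, [0, 1]           # states 0..4; action 0 = left, 1 = right
    alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        # Move left or right; reaching state 4 yields reward 1 and ends the episode.
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        done = nxt == N_STATES - 1
        return nxt, (1.0 if done else 0.0), done

    for _ in range(500):                    # episodes of trial-and-error learning
        state, done = 0, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)                     # explore
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
            nxt, reward, done = step(state, action)
            target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])  # TD update
            state = nxt

    print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned action at state 0 (moves right)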

Other emerging paradigms include:

  • Generative models (e.g., GANs, diffusion models), capable of producing realistic images, audio, and text.
  • Multimodal artificial intelligence, integrating vision, language, and other sensory modalities to perform richer reasoning.
  • Neuro-symbolic artificial intelligence, blending statistical learning with symbolic reasoning to achieve more interpretable and robust intelligence.

These methods reflect a broader shift: modern artificial intelligence increasingly emphasises adaptivity, statistical reasoning, and integration across modalities, departing from the rigid, rule-based approaches of early symbolic artificial intelligence.

Current Applications of Artificial Intelligence

Artificial Intelligence is no longer an abstract theoretical pursuit; it is embedded in the fabric of contemporary society. Its applications span sectors as diverse as language processing, vision, autonomous robotics, healthcare, finance, and beyond. This section surveys the most prominent deployments of artificial intelligence, illustrating both technical capabilities and the social implications of widespread adoption.

Natural Language Processing (NLP) concerns the interaction between computers and human language. Early NLP relied on rule-based systems and symbolic approaches, such as ELIZA (Weizenbaum, 1966), which simulated conversation through pattern matching. However, contemporary NLP is dominated by statistical and deep learning models, particularly transformers.

Transformer architectures, exemplified by models like BERT (Devlin et al., 2019) and GPT (Radford et al., 2019), utilise self-attention mechanisms to capture relationships across sequences of text, allowing for nuanced understanding and generation of language. These models can perform a wide range of tasks, including translation, summarisation, question-answering, and content generation.
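
In practice, such pretrained models are usually accessed through high-level libraries. The brief sketch below assumes the Hugging Face transformers library is installed and that a default pretrained summarisation model can be downloaded; neither assumption comes from the text above.

    # Minimal usage sketch: load a default pretrained summarisation pipeline
    # and apply it to a short passage of text.
    from transformers import pipeline

    summariser = pipeline("summarization")
    text = ("Artificial intelligence has evolved from symbolic reasoning systems "
            "to large statistical models trained on vast corpora of text, "
            "enabling translation, summarisation, and question answering.")
    print(summariser(text, max_length=30, min_length=10)[0]["summary_text"])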

Beyond technical capability, NLP applications raise pressing societal questions. Large language models (LLMs) can inadvertently reproduce biases present in training data, propagate misinformation, and generate content that challenges traditional notions of authorship and accountability. Addressing these challenges requires interdisciplinary research encompassing ethics, policy, and computational linguistics.

Computer vision seeks to enable machines to perceive and interpret visual information. Early systems focused on edge detection and template matching, while modern approaches rely heavily on deep convolutional neural networks (CNNs). CNNs are particularly adept at extracting hierarchical features from images, allowing for tasks such as image classification, object detection, and facial recognition.
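
The hierarchical feature extraction described here can be sketched as a small convolutional network. The example below uses PyTorch (an assumed dependency); the layer sizes are arbitrary rather than representative of production systems.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, n_classes)

        def forward(self, x):                      # x: (batch, 3, 32, 32)
            return self.classifier(self.features(x).flatten(1))

    logits = TinyCNN()(torch.randn(4, 3, 32, 32))  # four random 32x32 RGB images
    print(logits.shape)                            # torch.Size([4, 10])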

Applications of computer vision are widespread: autonomous vehicles use vision systems for navigation and obstacle detection, healthcare systems employ imaging analysis for diagnostic support, and surveillance systems monitor public spaces. The accuracy of modern models often surpasses human performance in narrow domains; for example, CNNs can detect diabetic retinopathy from retinal scans with remarkable precision.

However, computer vision also introduces ethical concerns, particularly regarding privacy, surveillance, and algorithmic bias. Facial recognition technologies, for instance, have been shown to perform less accurately on underrepresented demographic groups, raising questions about fairness and accountability.

Robotics represents a domain in which artificial intelligence interacts with the physical world. Early industrial robots performed repetitive, pre-programmed tasks, but modern autonomous systems integrate perception, reasoning, and decision-making to operate in dynamic environments.

Examples include:

  • Autonomous vehicles, which combine LIDAR, radar, cameras, and artificial intelligence algorithms to navigate complex traffic environments.
  • Service robots, such as warehouse automation systems, which optimise logistics through AI-driven task allocation.
  • Social robots, designed for interaction with humans in education, healthcare, and companionship contexts.

Autonomous robotics demonstrates the integration of multiple artificial intelligence methodologies: perception via computer vision, planning and decision-making via reinforcement learning, and interaction via NLP systems. These systems also pose novel regulatory challenges, as errors or failures can result in physical harm.

Medicine: Artificial intelligence has transformed diagnostics, treatment planning, and drug discovery. Deep learning models analyse medical imaging for early detection of diseases such as cancer, while predictive analytics optimise hospital resource allocation. Furthermore, generative models accelerate drug discovery by predicting molecular interactions and simulating chemical properties.

Finance: Artificial intelligence-driven algorithms underpin high-frequency trading, fraud detection, and credit scoring. Predictive models assess market risks, while anomaly detection systems identify suspicious transactions. The adoption of artificial intelligence in finance introduces questions of transparency, systemic risk, and accountability, particularly when models operate autonomously.

Industry and Manufacturing: Artificial intelligence optimises supply chains, predictive maintenance, and quality control. Sensors coupled with artificial intelligence models predict equipment failures before they occur, reducing downtime and operational costs. Generative design systems explore vast solution spaces to propose innovative engineering designs, accelerating product development.

Across these sectors, artificial intelligence enhances efficiency, accuracy, and scalability, yet also necessitates careful consideration of human oversight, ethical deployment, and societal impact.

As artificial intelligence pervades society, the ethical dimension becomes increasingly prominent. Key considerations include:

  • Bias and fairness: Artificial intelligence systems reflect biases in training data, potentially amplifying social inequalities.
  • Transparency and explainability: Complex models, especially deep learning systems, are often opaque (“black boxes”), challenging accountability.
  • Privacy and surveillance: Artificial intelligence-enabled monitoring can intrude on individual freedoms, particularly in public spaces and online platforms.
  • Economic and labour impacts: Automation threatens to displace jobs, necessitating policy responses and workforce retraining.
  • Autonomy and governance: Decisions made by artificial intelligence systems, particularly in high-stakes domains such as healthcare, finance, and military applications, require robust regulatory oversight.

These challenges have led to the emergence of artificial intelligence ethics frameworks, both at institutional and governmental levels, emphasising responsible innovation, inclusivity, and human-centric design.

Conclusion

Artificial Intelligence has evolved from a philosophical curiosity to a transformative technological force that permeates modern society. The trajectory of artificial intelligence, from early formal logic and symbolic systems, through periods of optimism and scepticism, to contemporary deep learning and autonomous systems, reflects a recurring dialectic between conceptual ambition, technical feasibility, and societal context.

Historically, artificial intelligence emerged from human attempts to understand intelligence itself. Thinkers such as Aristotle, Leibniz, and Turing laid the conceptual and formal foundations, establishing that reasoning could, in principle, be mechanised. The mid-twentieth century witnessed the birth of artificial intelligence as a formal research discipline, characterised initially by symbolic approaches and expert systems. While these early methods demonstrated the potential for computational reasoning, they also exposed the limitations of rule-based intelligence, leading to periods of reduced enthusiasm, the so-called artificial intelligence winters.

Contemporary artificial intelligence, by contrast, has been propelled by data-driven and statistical methodologies, exemplified by machine learning, deep neural networks, and reinforcement learning. These techniques have enabled machines to perceive, reason, and act with unprecedented accuracy in narrowly defined domains. Applications span natural language processing, computer vision, robotics, medicine, finance, and industrial operations, profoundly reshaping both economic and social landscapes.

Yet, as the preceding discussion of applications and ethics highlighted, the deployment of artificial intelligence is not purely technical; it carries ethical, societal, and regulatory implications. Issues of bias, privacy, accountability, and workforce displacement necessitate robust governance frameworks and human-centred design. The interplay between capability and responsibility underscores a central theme of this dissertation: artificial intelligence is as much a social and philosophical challenge as it is a technical one.

Looking to the future, artificial intelligence research is oriented towards greater generality, integration, and interpretability. Cognitive architectures, neuro-symbolic models, multimodal learning, and quantum-assisted computation represent the frontier of research. Simultaneously, interdisciplinary collaboration across computer science, neuroscience, ethics, law, and economics will be critical to ensure that AI develops in alignment with human values and societal needs. The prospect of artificial general intelligence, and of increasingly autonomous systems more broadly, raises both extraordinary opportunities and profound challenges, requiring careful stewardship.

In reflecting on this evolution, one is reminded of the spirit of Richard Feynman: understanding complex phenomena requires not only mastery of technical details but also the ability to explain them clearly and insightfully. Artificial intelligence is a field that demands exactly this combination: rigorous mathematics and computation, coupled with deep conceptual clarity.

In conclusion, artificial intelligence stands at the intersection of technology, cognition, and society. Its history teaches us that progress is neither linear nor guaranteed; its current applications demonstrate the transformative potential of computational systems; and its future trajectories remind us that intelligence, whether biological or artificial, must be pursued with both curiosity and caution. As we continue to build systems that learn, reason, and act, the challenge is not merely to create machines that are intelligent, but to ensure that this intelligence serves the broader goals of humanity.

Bibliography

  • Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., and Qin, Y. (2004) ‘An Integrated Theory of the Mind’, Psychological Review, 111(4), pp. 1036–1060.
  • Aristotle (1984) The Complete Works of Aristotle, Vol. 1, ed. Jonathan Barnes, Princeton: Princeton University Press.
  • Autodesk (2018) Generative Design in Manufacturing: White Paper, Autodesk Press.
  • Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021) ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, FAccT, pp. 610–623.
  • Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., and Lloyd, S. (2017) ‘Quantum Machine Learning’, Nature, 549, pp. 195–202.
  • Bishop, C. M. (2006) Pattern Recognition and Machine Learning, New York: Springer.
  • Boole, G. (1854) An Investigation of the Laws of Thought, London: Walton and Maberly.
  • Brynjolfsson, E. and McAfee, A. (2014) The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, New York: W. W. Norton & Company.
  • Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Conference on Fairness, Accountability, and Transparency, pp. 77–91.
  • Crevier, D. (1993) AI: The Tumultuous History of the Search for Artificial Intelligence, New York: BasicBooks.
  • Descartes, R. (1662) Treatise on Man, Paris.
  • Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019) ‘BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding’, NAACL-HLT, pp. 4171–4186.
  • European Commission (2021) Proposal for a Regulation on Artificial Intelligence (AI Act), Brussels.
  • Feigenbaum, E. A. and McCorduck, P. (1983) The Fifth Generation, Reading, MA: Addison-Wesley.
  • Garcez, A. S. d., Lamb, L. C., and Gabbay, D. M. (2009) Neural-Symbolic Cognitive Reasoning, Springer.
  • Goertzel, B. and Pennachin, C. (2007) Artificial General Intelligence, Berlin: Springer.
  • Goodfellow, I., Bengio, Y., and Courville, A. (2016) Deep Learning, MIT Press.
  • Gulshan, V. et al. (2016) ‘Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs’, JAMA, 316(22), pp. 2402–2410.
  • Jobin, A., Ienca, M., and Vayena, E. (2019) ‘The Global Landscape of AI Ethics Guidelines’, Nature Machine Intelligence, 1, pp. 389–399.
  • Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012) ‘ImageNet Classification with Deep Convolutional Neural Networks’, Advances in Neural Information Processing Systems, 25, pp. 1097–1105.
  • Laird, J. E. (2012) The Soar Cognitive Architecture, Cambridge, MA: MIT Press.
  • LeCun, Y., Bengio, Y., and Hinton, G. (2015) ‘Deep Learning’, Nature, 521(7553), pp. 436–444.
  • Leibniz, G. W. (1679) A New Method for Learning and Reasoning, Hanover.
  • Lovelace, A. (1843) ‘Notes on the Analytical Engine’, Scientific Memoirs, 3, pp. 666–731.
  • MacQueen, J. (1967) ‘Some Methods for Classification and Analysis of Multivariate Observations’, Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1, pp. 281–297.
  • McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. (1956) ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, Dartmouth College.
  • McCulloch, W. S. and Pitts, W. (1943) ‘A Logical Calculus of Ideas Immanent in Nervous Activity’, Bulletin of Mathematical Biophysics, 5, pp. 115–133.
  • Mnih, V. et al. (2015) ‘Human-Level Control Through Deep Reinforcement Learning’, Nature, 518(7540), pp. 529–533.
  • Newell, A. and Simon, H. A. (1956) ‘The Logic Theory Machine’, IRE Transactions on Information Theory, 2(3), pp. 61–79.
  • Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2019) Language Models are Unsupervised Multitask Learners, OpenAI.
  • Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986) ‘Learning Representations by Back-Propagating Errors’, Nature, 323(6088), pp. 533–536.
  • Silver, D. et al. (2016) ‘Mastering the Game of Go with Deep Neural Networks and Tree Search’, Nature, 529(7587), pp. 484–489.
  • Sutton, R. S. and Barto, A. G. (2018) Reinforcement Learning: An Introduction, 2nd edition, MIT Press.
  • Turing, A. M. (1936) ‘On Computable Numbers, with an Application to the Entscheidungsproblem’, Proceedings of the London Mathematical Society, Series 2, 42, pp. 230–265.
  • Turing, A. M. (1950) ‘Computing Machinery and Intelligence’, Mind, 59(236), pp. 433–460.
  • Vaswani, A. et al. (2017) ‘Attention is All You Need’, Advances in Neural Information Processing Systems, 30, pp. 5998–6008.
  • Weizenbaum, J. (1966) ‘ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine’, Communications of the ACM, 9(1), pp. 36–45.
  • Zhavoronkov, A. et al. (2019) ‘Deep Learning Enables Rapid Identification of Potent DDR1 Kinase Inhibitors’, Nature Biotechnology, 37, pp. 1038–1040.
