
Security

 

Your business takes data security seriously, and so do we. We’ve embedded world-class security protocols into every aspect of HallianAI.

 

  • Two-Factor Authentication (2FA): Two-Factor Authentication adds an extra layer of security by requiring not only a password and username but also something that only the user has on them, such as a physical token or a mobile device. This significantly reduces the risk of unauthorized access, even if the password is compromised.

  • Data Encryption: Data encryption is the process of converting plaintext data into a coded form, known as ciphertext, which can only be read by someone who has the decryption key. This ensures that sensitive information remains confidential and secure from unauthorized access during storage and transmission.

  • Role-Based Access Control (RBAC): Role-Based Access Control restricts system access to authorized users based on their roles within an organization. This approach helps ensure that individuals can only access the information and resources necessary for their job functions, enhancing security and compliance.

  • Your data stays within your firewall: We install our application within your infrastructure, and all data pipelines are built within your organization’s IT security infrastructure.

Role-Based Access Control

 

Role-based access control (RBAC) is a security mechanism implemented within the HallianAI platform to control and manage user access based on their assigned roles. With RBAC, access to various resources and functionalities within the HallianAI platform is determined by the specific roles assigned to users. This approach ensures that users are granted appropriate permissions and privileges based on their job responsibilities or organizational roles.

 

RBAC enhances security by reducing the risk of unauthorized access and potential data breaches, while also simplifying the administration of user access rights. The administrators of the tool within your organization can effectively manage user permissions, maintain data integrity, and protect sensitive information.

How It Works

  1. Users are assigned to a role, either via mass updates or by connecting to Active Directory.

  2. Roles are then given access to individual data sets (indexes).

  3. Users can only see the indexes allowed by their role(s).
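The three steps above can be sketched as a simple lookup: each user's visible indexes are the union of the indexes granted to that user's roles. The names below (`USER_ROLES`, `ROLE_INDEXES`, `visible_indexes`) are illustrative, not the actual HallianAI API.

```python
# Illustrative role-to-index lookup; data and names are hypothetical.
ROLE_INDEXES = {
    "hr": {"hris-personnel", "hr-policy"},
    "engineering": {"hr-policy", "eng-docs"},
}

USER_ROLES = {
    "alice": ["hr"],
    "bob": ["engineering"],
}

def visible_indexes(user: str) -> set[str]:
    """Union of the indexes granted to each of the user's roles."""
    indexes: set[str] = set()
    for role in USER_ROLES.get(user, []):
        indexes |= ROLE_INDEXES.get(role, set())
    return indexes

print(sorted(visible_indexes("bob")))  # ['eng-docs', 'hr-policy']
```

Because a user can hold several roles, the union operation means granting a new role never revokes access already granted by another role.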


Admin Features

 

System Message


In the context of Generative AI, a system message refers to a prompt or instruction provided to a language model or generative model to guide its output generation. When using Generative AI models, such as those used by the HallianAI Platform, a system message is used to set the context or specify the desired output from the model. It helps to guide the model's response and ensure that the generated content aligns with the intended purpose or objective.


 

For example, consider the business use case of a technical resource library. A system message would tell the tool that it is a chat agent for technical documents. It could give instructions on the tone of the chat, direct it not to violate copyrights, and require that answers are always grounded in the documents available to it. By providing this system message to the Generative AI model, you set the context and instruct the model to stay within that use case; the model can then draw on its knowledge to generate coherent, informative answers based on the library’s documents.


 

System messages are crucial in Generative AI as they help shape the output and provide control over the generated content. By providing clear and specific system messages, users can influence the model's output and ensure that it aligns with their desired outcome.
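As a concrete sketch, the technical-library example above might be assembled like this. The message structure follows the common chat-completion convention (a "system" message followed by a "user" message); the wording and the `build_request` helper are illustrative, not the HallianAI internal API.

```python
# Illustrative only: shows how an admin-defined system message is
# prepended to every chat turn to set context for the model.

def build_request(user_question: str) -> list[dict]:
    """Build a message list with the system message in front."""
    system_message = (
        "You are a chat agent for technical documents. "
        "Answer in a professional tone, do not reproduce copyrighted "
        "text verbatim, and ground every answer in the documents "
        "retrieved for this query."
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_question},
    ]

messages = build_request("Summarize the deployment guide.")
```

Because the system message travels with every request, users never see or edit it; administrators change the model's behavior in one place.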


Index Mapping to Roles


In the context of a RAG (Retrieval-Augmented Generation) generative AI model, an index refers to a data structure that stores and organizes information for efficient retrieval. The index plays a crucial role in the retrieval component of the RAG model.


 

To enable efficient retrieval, the RAG model utilizes an index. This index is built on the knowledge source and contains precomputed representations or embeddings of the documents or entities in the source. These representations capture the semantic information of the documents or entities, allowing for effective matching and retrieval during the generation process.

The index helps the RAG model quickly identify and retrieve the most relevant information from the knowledge source based on the given input or query.
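The retrieval step described above can be sketched as a similarity search over precomputed embeddings. This is a toy model: real systems use a learned embedding model and an approximate-nearest-neighbor index, and the three-dimensional vectors and document names here are fabricated for illustration.

```python
import math

# Toy index: document name -> precomputed embedding (fake 3-d vectors).
INDEX = {
    "vacation-policy": [0.9, 0.1, 0.0],
    "expense-policy": [0.8, 0.2, 0.1],
    "api-reference": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, top_k=2):
    """Return the top_k documents most similar to the query embedding."""
    ranked = sorted(INDEX, key=lambda d: cosine(query_embedding, INDEX[d]),
                    reverse=True)
    return ranked[:top_k]

print(retrieve([1.0, 0.0, 0.0]))  # ['vacation-policy', 'expense-policy']
```

The retrieved documents are then passed to the generative model as grounding context, which is the "retrieval-augmented" half of RAG.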


 

The HallianAI Platform allows administrators to map roles to specific indexes. For example, imagine two indexes created for the Human Resources department: 1) the HRIS Personnel Index, which may include sensitive information about individuals, and 2) the HR Policy Index, which contains publicly available policies. The HR Role may have access to both the HRIS Personnel Index and the HR Policy Index. The Engineering Role may have access to the HR Policy Index but would not be granted access to the HRIS Personnel Index.
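The HR example can be sketched as a grant table consulted at query time: each index records which roles may search it. The table and helper below are illustrative, not the actual HallianAI schema.

```python
# Hypothetical grant table: index name -> roles permitted to query it.
INDEX_GRANTS = {
    "HRIS Personnel Index": {"HR Role"},
    "HR Policy Index": {"HR Role", "Engineering Role"},
}

def searchable_indexes(role: str) -> list[str]:
    """Indexes the given role may query, in a stable sorted order."""
    return sorted(name for name, roles in INDEX_GRANTS.items()
                  if role in roles)

print(searchable_indexes("HR Role"))
# ['HR Policy Index', 'HRIS Personnel Index']
print(searchable_indexes("Engineering Role"))
# ['HR Policy Index']
```

Restricting the search to this list before retrieval ever runs means a query from an Engineering user cannot surface personnel records, even by accident.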


 

This mapping greatly enhances security while also simplifying the administration of user access rights.


 

Admin Controls


Past Messages

Generative AI tools use prior user inputs to provide continuity in the chat experience. HallianAI allows this to be modified through an intuitive user interface.


 

Temperature

Define how creative you allow the LLM to be. Modifying this setting can make the model more deterministic or more creative, depending on the particular use case.


 

Token Allowance

Generative AI models use “tokens” to track usage of the LLM. Tokens refer to the individual units or elements that make up the input data. In natural language processing (NLP) tasks, tokens are typically words or subwords. These models process text by breaking it down into tokens and then generating or predicting the next token based on the context. Token usage is the primary driver of the variable cost of using Generative AI. The HallianAI Platform makes it simple for administrators to modify token limits to prevent unnecessary spend while also ensuring complete responses for users.
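The three admin controls above can be pictured as a single configuration object applied to every request: how many past messages to resend, how creative the model may be, and how many tokens it may spend per response. The field names and defaults below are assumptions for illustration, not the actual HallianAI settings schema.

```python
from dataclasses import dataclass

@dataclass
class ChatAdminConfig:
    """Hypothetical admin settings; names and defaults are illustrative."""
    past_messages: int = 10    # prior turns resent for continuity
    temperature: float = 0.2   # 0.0 = deterministic, higher = more creative
    max_tokens: int = 800      # cap on tokens generated per response

def apply_config(history: list[dict], cfg: ChatAdminConfig) -> dict:
    """Build model parameters, keeping only the most recent turns."""
    return {
        "messages": history[-cfg.past_messages:],
        "temperature": cfg.temperature,
        "max_tokens": cfg.max_tokens,
    }

history = [{"role": "user", "content": f"turn {i}"} for i in range(30)]
params = apply_config(history, ChatAdminConfig())
print(len(params["messages"]))  # 10
```

Trimming the history and capping `max_tokens` both bound the token count of a request, which is what keeps per-conversation cost predictable.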

LLM of Your Choice


Large Language Models (LLMs) can differ in several ways, including their architecture, training data, and specific applications. Here are some key differences:

  • Architecture: Different LLMs may use different neural network architectures. For example, GPT-3 uses a transformer architecture, which is highly effective for natural language processing tasks. Other models might use variations or entirely different architectures optimized for specific tasks.

  • Cost Considerations: Different LLMs use different quantities of tokens and have different usage fees.

  • Training Data: The quality, quantity, and diversity of the training data can significantly impact the performance of an LLM. Some models are trained on a wide range of internet text, while others might be trained on more specialized datasets to perform better in specific domains.

  • Size: The number of parameters in an LLM can vary widely. Larger models with more parameters generally have better performance but require more computational resources. For instance, GPT-3 has 175 billion parameters, making it one of the largest models available.

  • Fine-Tuning: Some LLMs are fine-tuned for specific tasks or industries. This means they are further trained on specialized datasets to improve their performance in particular areas, such as medical diagnosis, legal document analysis, or customer service.

  • Capabilities and Limitations: Different LLMs may excel in different areas. Some might be better at generating human-like text, while others might be more adept at understanding context or performing specific tasks like translation, summarization, or analytics.

  • Ethical Considerations: The development and deployment of LLMs also involve ethical considerations, such as bias in training data, potential misuse, and the environmental impact of training large models. Different organizations might prioritize these aspects differently, leading to variations in how LLMs are developed and used.


These differences can make some LLMs more suitable for certain applications than others. The HallianAI Platform allows organizations to find the right LLM for them and then put it to work on their own data.

Index Flexibility


In a Retrieval-Augmented Generation (RAG) generative AI model, like the HallianAI Platform, an index is a data structure that stores and organizes information for efficient retrieval. These indexes are built on your data sources and contain precomputed representations, or embeddings, of the documents or entities, capturing their semantic information.

 

Indexes are crucial for the retrieval component of the HallianAI Platform, enabling it to quickly identify and retrieve the most relevant information from the knowledge source based on the given input or query.

The HallianAI Platform allows for flexibility in the creation of indexes. This enables our team to work with your business to create the appropriate indexes for your particular use cases.

Chat History

 

Having chat history in applications like the HallianAI Platform is crucial for maintaining continuity in conversations. It allows users to refer back to earlier points, preserving context and avoiding the need to repeat information. This is especially useful for complex or ongoing discussions, enhancing the overall user experience by making interactions more seamless.

 

Additionally, chat history serves as a valuable reference tool, enabling users to revisit past conversations to retrieve important information. This is beneficial in professional settings where accuracy is critical. Overall, chat history improves communication efficiency and adds convenience and reliability to the user experience.
