
Releases: tomkat-cr/genericsuite-app-maker


0.4.0 (2025-01-29)


New

Add the oTTomator Live Studio compatible GSAM Agent (as part of the oTTomator hackathon) [GS-166].
Add OpenRouter support [GS-166] (see the sketch after this list).
Add Pydantic AI support [GS-166].
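
The OpenRouter support mentioned above can typically be reached through OpenRouter's OpenAI-compatible endpoint. The snippet below is only a minimal sketch of that idea; the environment variable name and the model id are illustrative assumptions, not GSAM's actual configuration keys.

```python
# Minimal sketch: calling a model through OpenRouter's OpenAI-compatible API.
# OPENROUTER_API_KEY and the model id are illustrative, not GSAM config values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Suggest a name for a to-do app."}],
)
print(response.choices[0].message.content)
```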


0.3.0 (2025-01-25)


New

Add AI/ML API provider and models [GS-55] [GS-156].
Add SUGGESTIONS_MODEL_REPLACEMENT parameter to avoid using the OpenAI reasoning models for suggestions generation [GS-55].
Add SUGGESTIONS_DEFAULT_TIMEFRAME parameter to set the default timeframe for suggestions to 48 hours [GS-55].
Add LLM_MODEL_FORCED_VALUES parameter to set fixed values for models like o1-preview that only accept temperature=1 [GS-55].
Add LLM_MODEL_PARAMS_NAMING parameter to rename model parameters [GS-55] (see the configuration sketch after this list).
Add DeepSeek-V3 model [GS-55].
Add AI-generated titles to conversations [GS-55].
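
A rough sketch of how the new 0.3.0 parameters fit together is shown below. The parameter names come from this changelog; the value formats and the model names are illustrative assumptions, not GSAM's actual defaults.

```python
# Hypothetical sketch of the new 0.3.0 parameters; names from the changelog,
# values and structure are assumptions.
suggestions_config = {
    # Use a non-reasoning model when generating suggestions.
    "SUGGESTIONS_MODEL_REPLACEMENT": "gpt-4o-mini",
    # Default timeframe injected into the suggestions prompt ({timeframe} token).
    "SUGGESTIONS_DEFAULT_TIMEFRAME": "48 hours",
    # Fixed values for models that reject other settings (e.g. o1-preview needs temperature=1).
    "LLM_MODEL_FORCED_VALUES": {"o1-preview": {"temperature": 1}},
    # Per-model parameter renaming (e.g. o1 models expect max_completion_tokens).
    "LLM_MODEL_PARAMS_NAMING": {"o1-preview": {"max_tokens": "max_completion_tokens"}},
}
```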

Changes

The suggestions generation prompt was enhanced to be one-shot: a suggestions quantity token {qty} was added, the {timeframe} token is now replaced with the SUGGESTIONS_DEFAULT_TIMEFRAME parameter value, and the application subject token {subject} was replaced with a more generic subject text [GS-55] (see the prompt sketch after this list).
All prompts were enhanced by placing the application subject at the end of each prompt [GS-55].
Restore the original question when "App Ideation from prompt" is used [GS-55].
User prompts were split so that separate system prompts configure the LLM model behavior [GS-55].
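
The token replacement described in the first change above works roughly as sketched below. The template wording and the helper function are invented for illustration; only the {qty}, {timeframe} and {subject} tokens come from the changelog.

```python
# Illustrative sketch of the suggestions prompt token replacement.
import os

SUGGESTIONS_PROMPT_TEMPLATE = (
    "Generate {qty} follow-up suggestions about {subject} "
    "covering the last {timeframe} of conversation."
)

def build_suggestions_prompt(qty: int, subject: str) -> str:
    """Replace the prompt tokens with their configured values."""
    timeframe = os.environ.get("SUGGESTIONS_DEFAULT_TIMEFRAME", "48 hours")
    return SUGGESTIONS_PROMPT_TEMPLATE.format(qty=qty, timeframe=timeframe, subject=subject)
```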

Fixes

Fix the error "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead." when using the OpenAI o1-preview/o1-mini models (see the sketch after this list).
Fix the "unified" flag assignment in get_unified_flag(), which always returned False [GS-55].
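
The o1 error above comes from the OpenAI API, which rejects max_tokens for the o1 models in favor of max_completion_tokens. The renaming sketch below shows the general idea; it is an assumption about the approach, not GSAM's actual implementation.

```python
# Sketch of the parameter renaming that avoids the o1 "max_tokens" error.
def rename_o1_params(model: str, params: dict) -> dict:
    renamed = dict(params)
    if model.startswith("o1") and "max_tokens" in renamed:
        # o1-preview / o1-mini expect 'max_completion_tokens' instead of 'max_tokens'.
        renamed["max_completion_tokens"] = renamed.pop("max_tokens")
    return renamed

print(rename_o1_params("o1-mini", {"max_tokens": 1024, "temperature": 1}))
# -> {'temperature': 1, 'max_completion_tokens': 1024}
```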


0.2.0 (2024-11-19)


New

Add ideation from a user's prompt in the App Ideation section [GS-154].
Add the "Generate App Ideas" button to the App Ideation section [GS-154].
Add the xAI Grok model [GS-157].
Add "timeframe" to the ideation form [GS-154].
Add model advanced configuration (temperature, max. tokens, top P, frequency penalty, presence penalty) [GS-154].
Add add_js_script() to inject arbitrary JavaScript code into Streamlit [GS-155] (see the sketch after this list).
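
One common way to implement a helper like add_js_script() is to render a zero-height HTML component containing a script tag. The sketch below is an assumption about the approach, not GSAM's actual implementation.

```python
# Minimal sketch of injecting JavaScript into a Streamlit app via an HTML component.
import streamlit.components.v1 as components

def add_js_script(js_code: str) -> None:
    """Render a zero-height HTML component that runs the given JavaScript."""
    components.html(f"<script>{js_code}</script>", height=0)

add_js_script("console.log('GSAM page loaded');")
```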

Changes

All prompts were enhanced.
The LLM_PROVIDERS, TEXT_TO_IMAGE_PROVIDERS, and TEXT_TO_VIDEO_PROVIDERS lists were converted to dicts in which each provider has a "requirements" list and an "active" attribute [GS-154] (see the sketch after this list).
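
The structure change can be pictured as below. Only the shape of the dict ("requirements" and "active" per provider) comes from the changelog; the provider entries and requirement names are illustrative assumptions.

```python
import os

# Before 0.2.0 (illustrative): a flat list of provider names.
LLM_PROVIDERS_OLD = ["OpenAI", "Together.AI"]

# From 0.2.0 (illustrative): each provider carries a "requirements" list and an "active" flag.
LLM_PROVIDERS = {
    "OpenAI": {"requirements": ["OPENAI_API_KEY"], "active": True},
    "Together.AI": {"requirements": ["TOGETHER_API_KEY"], "active": True},
}

# Typical use: keep only active providers whose requirements are satisfied.
available = [
    name for name, cfg in LLM_PROVIDERS.items()
    if cfg["active"] and all(os.environ.get(req) for req in cfg["requirements"])
]
```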

Fixes

Fix the error raised when the image file does not exist [GS-154].
Fix errors with the OpenAI image generation [GS-154].

Breaks

The LLM_PROVIDERS_COMPLETE_LIST config parameter was removed. Instead, the LLM_PROVIDERS list was converted to a dict in which each provider has a "requirements" list and an "active" attribute [GS-154].


0.1.0 (2024-11-10)


New

Add: ideation form to get the application description and other data, and generate names, database structure and PowerPoint presentations [GS-154].
Add: forms constructor and processor [GS-154].
Add: PowerPoint presentation generation [GS-154] (see the sketch after this list).
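
The PowerPoint generation can be done with a library such as python-pptx; the sketch below illustrates the kind of output described above and is not GSAM's actual generator (the library choice and slide content are assumptions).

```python
# Minimal sketch of generating a PowerPoint deck with python-pptx.
from pptx import Presentation

def build_presentation(title: str, bullet_slides: dict, path: str) -> None:
    prs = Presentation()
    # Title slide
    slide = prs.slides.add_slide(prs.slide_layouts[0])
    slide.shapes.title.text = title
    # One "Title and Content" slide per section
    for heading, bullets in bullet_slides.items():
        slide = prs.slides.add_slide(prs.slide_layouts[1])
        slide.shapes.title.text = heading
        slide.placeholders[1].text = "\n".join(bullets)
    prs.save(path)

build_presentation(
    "My App Idea",
    {"Database structure": ["users", "tasks"], "Name candidates": ["TaskGenie", "DoItNow"]},
    "app_idea.pptx",
)
```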

Fixes

Fix: the "You may be able to resolve this warning by setting model_config['protected_namespaces'] = ()" warning at application startup [GS-154] (see the sketch after this list).
Fix: the "save_file() missing 1 required positional argument: 'content'" error when running the Code generator [GS-154].
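
The protected_namespaces warning is Pydantic v2's reaction to field names starting with "model_"; clearing the protected namespaces is the usual fix. The model below is illustrative, not a GSAM class.

```python
# Sketch of the usual fix for the Pydantic "protected_namespaces" warning.
from pydantic import BaseModel, ConfigDict

class LlmSettings(BaseModel):
    model_config = ConfigDict(protected_namespaces=())  # silence the warning
    model_name: str  # would otherwise trigger the "model_" namespace warning
    temperature: float = 0.5
```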

0.0.2 (2024-11-09)


New

Add tabs to organize the multiple app and code generation options [GS-154].
Add the "use response as prompt" feature [GS-154].
Add LlamaIndex embeddings index query to the code generator [GS-154].
Add get_unified_flag() to configure providers and models that only accept user messages, not system prompts, like o1-preview [GS-154] (see the sketch after this list).
Add prepare_model_params() to normalize the client and model parameters preparation [GS-154].
Add LlamaIndexCustomLLM to abstract the LlamaIndex models with the codegen LlmProvider [GS-154].
Add show_conversation_debug() to give the user detailed model responses [GS-154].
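
The "unified" idea behind get_unified_flag() is sketched below: models such as o1-preview accept only user messages, so the system prompt has to be folded into the first user message. The model list and function bodies are assumptions, not GSAM's actual code.

```python
# Sketch of "unified" message handling for models without system-prompt support.
UNIFIED_MODELS = {"o1-preview", "o1-mini"}  # assumed list

def get_unified_flag(model: str) -> bool:
    """True when the model cannot take a separate system prompt."""
    return model in UNIFIED_MODELS

def build_messages(model: str, system_prompt: str, user_prompt: str) -> list[dict]:
    if get_unified_flag(model):
        # Prepend the system instructions to the user message instead.
        return [{"role": "user", "content": f"{system_prompt}\n\n{user_prompt}"}]
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
```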

Changes

Main Streamlit UI layout elements were separated into different functions [GS-154].
All model types (text, video, image) use the model configuration UI selection [GS-154].
Detailed "Invalid LLM/ImageGen/TextToVideo provider" error messages [GS-154].
read_file() can now save the read files to a local directory so that the LlamaIndex embeddings can read them [GS-154] (see the sketch after this list).
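
The read_file() change can be pictured as below: persist the uploaded content to a local directory so a LlamaIndex reader can index it. The paths, function bodies, and the llama_index import layout (a recent llama-index release) are assumptions.

```python
# Sketch: save files locally, then index that directory with LlamaIndex.
import os
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

UPLOADS_DIR = "./uploaded_files"  # assumed location

def read_file(filename: str, content: bytes) -> str:
    """Save the uploaded file locally and return its path."""
    os.makedirs(UPLOADS_DIR, exist_ok=True)
    path = os.path.join(UPLOADS_DIR, filename)
    with open(path, "wb") as f:
        f.write(content)
    return path

def build_index():
    documents = SimpleDirectoryReader(UPLOADS_DIR).load_data()
    return VectorStoreIndex.from_documents(documents)
```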

Fixes

Fix the enhance prompt feature, which was not working properly in the absence of a prompt text model [GS-154].

0.0.1 (2024-11-08)


New

Project started for the Llama Impact Hackathon [GS-154].
Add frontend UI in streamlit.io [GS-152].
Add code generation backend to create the Genericsuite JSON files and Tools Python code [GS-149].
Add video generation and follow-up data in the conversation database [GS-153].
Add image generation [GS-152].
Add the video gallery page [GS-153].
Add the image gallery page [GS-55].
Add MongoDB support [GS-152].
Add Together.AI support [GS-119].
Add Meta Llama models support [GS-119].
Add prompt enhancement support [GS-152].
Add a data management pull-down section in the sidebar [GS-152].
Add import and export database items to JSON files [GS-152].
Add DatabaseAbstract class to normalize the database classes structure [GS-152].
Add initial version of the GS mini-library, chat feature, image generation [GS-149].
Add a Streamlit UI library to normalize all Streamlit-specific methods and include it in the codegen library [GS-55].
Add get_new_item_id() to normalize the new "id" creation [GS-152].
Add code generation results processing and saving in conversations [SS-149].
Add configuration in JSON files [GS-55].
Add parameter values that can be read from a file path, e.g. "[refine_video_prompt.txt]" [GS-55] (see the sketch after this list).
Add a user-customizable suggestions prompt [SS-149].
Add a conversations buffer to speed up the display of the separated questions and content [SS-149].
Add model selection [GS-55].
Add buttons to generate project ideas, names, presentations, and video scripts [GS-55].
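
The "[file.txt]" convention mentioned above can be resolved roughly as sketched below: a value wrapped in square brackets is treated as a file name and the file content becomes the parameter value. The function name and config directory are assumptions.

```python
# Sketch of resolving parameter values like "[refine_video_prompt.txt]" from files.
import os

CONFIG_DIR = "./config"  # assumed location of the prompt/config files

def resolve_param_value(value: str) -> str:
    """Return the value itself, or the file content when it looks like "[name.txt]"."""
    if value.startswith("[") and value.endswith("]"):
        path = os.path.join(CONFIG_DIR, value[1:-1])
        with open(path, "r", encoding="utf-8") as f:
            return f.read()
    return value

# e.g. resolve_param_value("[refine_video_prompt.txt]") reads ./config/refine_video_prompt.txt
```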