A Project That Builds a Bridge Between Stories and Cinema

This project is more than a technological system. It is a bridge, stretched between two worlds: the world of stories written in ink and silence, and the world of light, motion, sound, and breathing characters. It connects literary imagination with cinematic embodiment, as if words themselves open a portal into a frame.

Where Text Becomes Vision
Every story — a fairy tale, a novel, a myth, or science fiction — enters the system as pure text. But what emerges is more than words. It becomes:
• living scenes,
• recognizable faces of characters,
• movement, wind, fire,
• music carrying emotional resonance,
• voices that shape imagery,
• and a visual sequence that can already be watched as if it were a trailer for a film that did not exist a moment ago — and suddenly does.

Magic, Powered by Technology
The project brings together all modern models and technologies that exist today:
• text-analysis models that understand stories deeper than a typical reader,
• script-generation engines that craft precise dramaturgy,
• visual models capable of creating frames in any chosen style,
• identity-anchoring systems that guarantee consistent characters,
• animation engines turning still images into moving scenes,
• audio models that produce music, ambience, and voices,
• a video pipeline that composes everything into a cohesive flow — from short trailers to full mini-films.
Text no longer remains flat. It becomes a living organism, moving, sounding, glowing, breathing.

A Cartoon Growing Out of a Paragraph
The system can read a story and transform it into a sequence of scenes, then a chain of frames, and finally an animated breath of an entire world. Every character keeps their identity, every gesture retains its shape, every scene preserves its color magic.
When an author writes: “He lifted his head and saw the star-path for the first time.” the platform itself creates:
• camera plan,
• angle,
• expression,
• starlight on the skin,
• motion of the gaze,
• a delicate musical cue,
• and a micro-trailer a few seconds long.

From Book to Frame. From Frame to Film.
The project forms a continuous chain of transformation:
1. Text → Analysis → Script
2. Script → Storyboard → Frames → Scenes
3. Scenes → Animation → Music → Dialogue
4. Video → Editing → Trailer or Film
Every step is automated, yet remains artistic, because the system is infused with style, color, and sensitivity.

A Bridge That Cannot Be Broken
The true power of this project lies in its universality: it can work with any story, in any genre, in any visual style, using every powerful model and technique available today. It turns authors into directors, readers into viewers, and stories into cinema. And it does so softly, seamlessly, as if this transformation had always been waiting to happen.
This project explores why revenue growth has slowed even though site traffic remains high. I built it as a learning project using real-world data to understand where the issue could be coming from.

I cleaned and prepared the dataset in Python, ran deeper queries in SQL, and explored patterns through a series of analyses: monthly cohort retention, a conversion funnel split by traffic source, and RFM customer segmentation. I then brought the findings together in a Tableau dashboard and a written report.

The cohort analysis helped me see how quickly engagement drops. Retention starts at about 39% in Month 0 and falls below 25% by Month 3–4. The funnel analysis showed that mobile brings in most visitors—around 90%—yet converts no better than web. The largest drop happened right after users added an item to their cart, which pointed me toward possible checkout confusion or hesitation.

RFM segmentation added a different angle: about 20% of customers (Champions and Big Spenders) bring in most revenue, while more than half fall into Occasional or Lost categories. This helped me link the retention findings with customer value and understand where future improvements could be made.
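To make the segmentation step concrete, here is a minimal sketch of RFM scoring in pandas, assuming an orders table with `customer_id`, `order_date`, and `amount` columns; the column names, score bins, and segment labels are illustrative, not the project's exact code.

```python
import pandas as pd

# Illustrative orders data; in the project this comes from the cleaned dataset.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_date": pd.to_datetime([
        "2024-01-05", "2024-03-10", "2024-02-20",
        "2024-01-15", "2024-02-01", "2024-03-25",
    ]),
    "amount": [40.0, 55.0, 20.0, 90.0, 35.0, 60.0],
})

snapshot = orders["order_date"].max() + pd.Timedelta(days=1)

# Recency, Frequency, Monetary per customer
rfm = orders.groupby("customer_id").agg(
    recency=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("amount", "sum"),
)

# Score each dimension 1-3 with quantile bins (illustrative, not the real cutoffs)
rfm["r_score"] = pd.qcut(rfm["recency"], 3, labels=[3, 2, 1]).astype(int)
rfm["f_score"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["m_score"] = pd.qcut(rfm["monetary"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)

def label(row):
    if row["r_score"] == 3 and row["f_score"] == 3:
        return "Champion"
    if row["m_score"] == 3:
        return "Big Spender"
    if row["r_score"] == 1:
        return "Lost"
    return "Occasional"

rfm["segment"] = rfm.apply(label, axis=1)
print(rfm)
```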
A modern Flask-based web IDE with multi-agent AI assistance powered by LangChain and OpenAI. Features a VS Code-like interface with intelligent code assistance, debugging capabilities, and comprehensive development tools.

## Features

### Core IDE Features
- File Management: Full file browser with create, edit, delete, and organize capabilities
- Monaco Editor: VS Code-style editor with syntax highlighting for 50+ languages
- Workspace Management: Project-based workspace selection and management
- Real-time Saving: Auto-save and manual save (Ctrl+S/Cmd+S) functionality

### AI-Powered Assistance
- Multi-Agent System: Specialized agents for different tasks:
  - Ask Agent: General Q&A and code explanations
  - Code Agent: Code generation, refactoring, and optimization
  - Debug Agent: Error analysis and debugging assistance
  - Orchestrator Agent: Intelligent task routing and coordination
- Context-Aware Help: AI assistance that understands your project structure
- Interactive Chat: Real-time conversation with AI assistants

### Advanced Development Tools
- Debug Panel: Comprehensive debugging with:
  - Console log monitoring and filtering
  - API call tracking and analysis
  - Component state monitoring
  - Performance metrics and timing
  - Error tracking and analysis
- UI Enhancements: Modern interface with resizable panels and responsive design
- Help System: Interactive tour and contextual assistance
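A minimal sketch of how an orchestrator might route a request to the specialized agents described above, using LangChain's ChatOpenAI; the model name, prompt wording, and agent system prompts are placeholders and not the app's actual implementation.

```python
# Routing sketch (assumes langchain-openai is installed and OPENAI_API_KEY is set);
# the agent prompts below are illustrative placeholders.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

AGENT_PROMPTS = {
    "ask": "You answer general questions and explain code.",
    "code": "You generate, refactor, and optimize code.",
    "debug": "You analyze errors and suggest fixes.",
}

router_prompt = ChatPromptTemplate.from_messages([
    ("system", "Classify the user request as exactly one of: ask, code, debug."),
    ("human", "{request}"),
])

def handle(request: str) -> str:
    # Orchestrator step: pick the agent responsible for this request
    route = (router_prompt | llm).invoke({"request": request}).content.strip().lower()
    agent_prompt = AGENT_PROMPTS.get(route, AGENT_PROMPTS["ask"])
    # Specialized agent step: answer with the chosen system prompt
    reply = llm.invoke([("system", agent_prompt), ("human", request)])
    return reply.content

print(handle("Why does my Flask route return a 404?"))
```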
About This Project: Automated AI Players Behaviors Analyzer

The Automated AI Players Behaviors Analyzer is a workflow I built to remove repetitive manual work from my daily analysis tasks and to create a more reliable, consistent way of reviewing the betting patterns players generate. Instead of manually going through spreadsheets and producing reports, this automation handles the full journey - from raw data to a clean, ready-to-share PDF.

1. Problem Statement
In my day-to-day work, I routinely go through hundreds of rows of player bets. Reviewing and structuring these insights manually took valuable time and added unnecessary friction. The aim of this project was simple:
- Streamline the analysis workflow
- Reduce repetitive tasks
- Ensure faster, more consistent reporting
- Deliver clean outputs to stakeholders automatically

2. Workflow Overview

Step 1 — Data Upload to Google Drive
The process begins when a Google Sheet with player betting information is uploaded to a designated Google Drive folder. This keeps the entry point familiar and easy for the team.

Step 2 — n8n Detects and Extracts the Data
n8n serves as the central automation engine. It picks up the uploaded file instantly, extracts the relevant data, and prepares it for analysis. This eliminates manual handling and ensures a consistent data structure every time.

Step 3 — Analysis via ChatGPT (OpenAI API)
Once n8n formats the data, it sends it to ChatGPT through the OpenAI API. ChatGPT generates a structured analysis of the betting patterns and player behavior and returns the output directly in HTML format. Producing HTML at this stage ensures the final report is already well formatted.

Step 4 — HTML → PDF Conversion
The HTML analysis is then forwarded to a dedicated PDF conversion API. The result is a clean, professionally styled PDF that requires no additional editing.

Step 5 — Delivery Through Slack Bot
The finished PDF report is automatically delivered to the intended recipient via Slack. The Slack bot sends the file along with a short message summarizing the contents, ensuring the right person receives the analysis without any manual follow-up.

Workflow DEMO video: https://www.youtube.com/watch?v=itLoZt4J38M

3. Value Delivered
- Significant time savings — routine analysis that previously required manual review now runs automatically.
- Consistency and repeatability — every report follows the same structure and formatting.
- Reduced operational friction — no exporting, no formatting, no copy-pasting; everything flows end-to-end.
- Instant delivery — the recipient gets a polished PDF without needing to request or chase updates.
- Better focus on high-impact tasks — with repetitive work removed, more time is available for deeper evaluation, decision-making, and product improvements.

4. Summary
The Automated AI Players Behaviors Analyzer connects tools we already rely on—Google Drive, n8n, ChatGPT, PDF APIs, and Slack—into a single smooth pipeline. It transforms raw player betting sheets into structured reports automatically, making our workflow faster, cleaner, and far more efficient. This setup now supports my day-to-day responsibilities without slowing me down, and ensures that analysis reaches the right stakeholders in a consistent, polished format every time.
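The workflow itself runs in n8n; purely to illustrate the logic of steps 3 and 4 outside of n8n, here is a hedged Python sketch of an OpenAI call that returns HTML and a subsequent POST to a PDF conversion service. The `PDF_API_URL` endpoint, payload shape, and sample data are hypothetical placeholders.

```python
# Sketch only: assumes the openai and requests packages are installed and
# OPENAI_API_KEY is set. PDF_API_URL stands in for the real conversion API.
import requests
from openai import OpenAI

client = OpenAI()
PDF_API_URL = "https://example-pdf-service.test/convert"  # hypothetical endpoint

def analyze_bets_to_html(rows: list[dict]) -> str:
    """Ask the model for a structured HTML analysis of betting rows."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Analyze player betting patterns. Reply with a full HTML report."},
            {"role": "user", "content": str(rows)},
        ],
    )
    return response.choices[0].message.content

def html_to_pdf(html: str) -> bytes:
    """Send HTML to a (hypothetical) conversion API and return PDF bytes."""
    resp = requests.post(PDF_API_URL, json={"html": html}, timeout=60)
    resp.raise_for_status()
    return resp.content

html_report = analyze_bets_to_html([{"player": "p-001", "stake": 25, "odds": 1.8}])
pdf_bytes = html_to_pdf(html_report)
```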
# Application Agent for cover letters

## Description of the Use Case

### Primary use
The Application Agent's purpose is to automate the process of writing individual cover letters for job applications, based on the user's Curriculum Vitae (CV) data and a provided job ad. It generates a ready-to-go, cleanly formatted cover letter PDF that clearly describes how your experience and skills match the requirements and tasks from the job ad. This feature actually consists of two agents, the [Job ad extraction agent](#Job-ad-extraction-agent) and the [Cover letter writing agent](#Cover-letter-writing-agent), which can be started independently or consecutively.

### Secondary uses
[Extracting CV from LinkedIn](#LinkedIn-CV-extract-agent): for the primary feature (cover letter generation) the agent must be provided with Curriculum Vitae information such as professional experience and educational background. An additional agent's purpose is to extract this from your LinkedIn profile. The agent gives you a good starting point, but some additional editing might be necessary before proceeding with the cover letter agent.

[Manual editing](#Manual-editing) is a feature that allows you to change and add all the Curriculum Vitae and personal data comfortably in the webapp.

[Job Search Agent](#Job-search-Agent) is an additional agent whose purpose is to help find job ads you want to apply to. It searches online in the database of the German **Agentur für Arbeit** and visualises the results in a table in the webapp. Necessary parameters for the search are some kind of job title and a German city as location. Optionally, the search area as a radius around the city center can be adjusted.

## Prerequisites

### General Prerequisites
**OpenAI API Key** The agent uses the OpenAI API, therefore an API key is needed. Store the API key as environment variable `OPENAI_API_KEY`; the app will then load it automatically.

**Configure your password** To secure your application, the API keys and cookies, add an environment variable `APPLICATION_AGENT_PW` with your password. You have to identify yourself with that password in the Streamlit app.

**Docker** Docker needs to be installed.

**LinkedIn Cookie (optional)** To enable some features like CV extraction from LinkedIn, a cookie is needed so the app can log into your account without you sharing the password. To do so, copy the cookie `li_at` from your browser while logged in on LinkedIn (open the developer tools with F12). Store the cookie as environment variable `LINKEDINCOOKIE`; the app will then load it automatically.

### Option to run the app in Docker
No additional installations are necessary.

### Option to run the app locally directly with uv
Additional installations are necessary.

**LaTeX compiler** For PDF generation a LaTeX compiler is required. Install one like TeX Live and make sure that the `luatex` binary is added to the path.

**uv** is needed to run the application; it also automatically takes care of the Python dependencies.
````shell
curl -LsSf https://astral.sh/uv/install.sh | sh
````

## Starting the application agent from terminal

### Run with Docker
First make sure all the [General Prerequisites](#General-Prerequisites) are taken care of. Because the project is not yet available in a public registry, you have to build it locally first.
````shell
sudo docker build . --tag 'ro6in/application-agent:latest'
````
After that the container can be run with:
````shell
sudo OPENAI_API_KEY=$OPENAI_API_KEY \
LINKEDINCOOKIE=$LINKEDINCOOKIE \
APPLICATION_AGENT_PW=$APPLICATION_AGENT_PW \
docker run -t -i \
-e "LINKEDINCOOKIE" \
-e "OPENAI_API_KEY" \
-e "APPLICATION_AGENT_PW" \
ro6in/application-agent
````
Or without LinkedIn features:
````shell
sudo OPENAI_API_KEY=$OPENAI_API_KEY \
APPLICATION_AGENT_PW=$APPLICATION_AGENT_PW \
docker run -t -i \
-e "OPENAI_API_KEY" \
-e "APPLICATION_AGENT_PW" \
ro6in/application-agent
````

### Run locally with uv
First make sure all the [Prerequisites](#Prerequisites) (general and additional for running locally) are taken care of. Then the app can be started with:
````shell
uv run --with streamlit streamlit run main.py
````

## How to use
1. After the webapp startup has finished, type in your `APPLICATION_AGENT_PW` to enable the functionalities.
2. If you want to proceed, you can now enable the main menu on the sidebar; this switch also gives you the option to always go back to this start page.
3. Next, decide whether you want to continue with one of the existing user profiles or create a new one.
4. After you switch to your user, you can choose among the next interaction interfaces, either the agent workflows or manual editing of your Curriculum Vitae or personal data.
   * Agent workflows (for all agents the `OPENAI_API_KEY` is required)
     - **Cover Letter Agent**: extracts information from a job ad and writes a cover letter based on that and your Curriculum Vitae. You therefore first need to provide your CV data, either manually or starting with the **LinkedIn CV extraction**.
     - **LinkedIn CV extraction**: extracts your Curriculum Vitae from your LinkedIn profile. `LINKEDINCOOKIE` is required for this feature.
     - **Job search Agent**: searches the database of the German **Agentur für Arbeit** for job postings.
   * Manual CV data editing
     - **CV style**: change the style and color of the CV and view it online.
     - **Upload data**: upload the `info.json` and `cv.json` files you might have stored locally from previous use of the app.
     - **Chronological items**: change or add chronological items like a professional experience item or an educational background item.
     - **Project**: change or add projects you want to have in your CV.
     - **Skill**: change or add skills from your CV.
     - **Languages**: change or add language skills.
   * Edit personal data
     - **Personal data**: edit the personal data.

### General
- On the sidebar there is error monitoring; after an error is caught it is indicated until the logging files are deleted with the corresponding button.
- Logging files can be downloaded as a `.zip` file, but the download is only prepared on demand to enhance performance.
- The personal data files `info.json` and `cv.json` can also be downloaded as a zip to store the data locally.
- After agent runs, the token costs are visualized on the sidebar.
- To prevent any agent from overriding your data, agents always work with the hidden `agent_user`. In the manual editing interfaces you can switch between your personal user storage and the agent's storage. By switching to the agent interfaces your personal data is always copied to the `agent_user`, and in the agent interfaces you can always reload your personal data to the `agent_user`.

## Technical description

### Job ad extraction agent
- For the job ad extraction, three different options to pass a job ad are possible:
  1. Provide a URL to a job posting; it has to be freely accessible, i.e. without a login or similar.
  2. Paste a full job ad as plain text.
  3. Provide a job ad via LinkedIn, not as a URL but with the identification number; you can find the number in the URL or in the post itself. `LINKEDINCOOKIE` is required for this feature.
- First the agent tries to load the job ad from the provided source (web scraping for the URL, plain text doesn't need further loading, and for LinkedIn the linkedinscraper is utilized).
- The content of the loaded job ad is validated: the LLM judges whether the content really is a job ad, otherwise the workflow is terminated.
- If the job ad was provided via URL, the LLM is used to validate whether the page title from the web scraper is already the job title.
- If there is no valid job title yet, the LLM is used to find it in the job ad content.
- Next the LLM identifies the contact person to address the cover letter to; this also includes extraction of the company name and address.
- The language is extracted from the job ad's main content; sometimes major parts of the scraped website are not the actual job ad and are in a different language. The purpose is that the cover letter should be in the same language as the job ad.
- The tasks and requirements/qualifications are extracted from the content.
- All the extracted data is stored as JSON.
- After execution, it's possible to review and manually change the job ad information.

```mermaid
---
config:
  flowchart:
    curve: linear
---
graph TD;
__start__([__start__]):::first
load_job_ad_web(load_job_ad_web)
load_job_ad_plain_text(load_job_ad_plain_text)
load_job_ad_linkedin(load_job_ad_linkedin)
validate_job_ad_content(validate_job_ad_content)
validate_meta_title(validate_meta_title)
extract_job_title_from_content(extract_job_title_from_content)
extract_recipient_for_cover_letter(extract_recipient_for_cover_letter)
extract_language_from_job_ad(extract_language_from_job_ad)
extract_relevant_info_from_job_ad(extract_relevant_info_from_job_ad)
store_job_ad(store job ad)
__end__([__end__]):::last
__start__ -.-> __end__;
__start__ -.-> load_job_ad_linkedin;
__start__ -.-> load_job_ad_plain_text;
__start__ -.-> load_job_ad_web;
extract_job_title_from_content --> extract_recipient_for_cover_letter;
extract_language_from_job_ad --> extract_relevant_info_from_job_ad;
extract_recipient_for_cover_letter --> extract_language_from_job_ad;
extract_relevant_info_from_job_ad --> store_job_ad;
load_job_ad_linkedin --> validate_job_ad_content;
load_job_ad_plain_text --> validate_job_ad_content;
load_job_ad_web --> validate_job_ad_content;
validate_job_ad_content -.-> __end__;
validate_job_ad_content -.-> extract_job_title_from_content;
validate_job_ad_content -.-> validate_meta_title;
validate_meta_title -.-> extract_job_title_from_content;
validate_meta_title -.-> extract_recipient_for_cover_letter;
store_job_ad --> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```

### Cover letter writing agent
- For the letter agent, CV data has to be available and job ad data has to have been loaded previously, or it has to be run together with the Job ad extraction agent.
- Best practice is to have extensive information in the CV data, much more than one would include in a CV sent with an application.
- There are two different options for how the agent processes the data to write the letter paragraphs (a code sketch of the second option follows below):
  1. Simple. All the job ad data and relevant CV data is directly provided to the LLM, with the prompt to write the paragraphs based on that information. In this case the LLM has to filter and match relevant information in one step, and also has to write the text. This tends to produce results that are not always truthful.
  2. Advanced, RAG-like. Experiences and projects are loaded into a Chroma DB. Then, based on the tasks and requirements/qualifications, the LLM generates a list of queries to search for in that DB. With similarity search, the queries are matched against the CV; if the similarity is close (below a threshold value), matches between CV items and ad content are created. The list of these matches is then used to generate the paragraphs instead of the raw CV and ad content. The results are much more reliable, with less hallucination that a skill was part of a professional experience when it actually wasn't.
- The agent first generates the letter extensions like opening, closing, subject line and address field, based on the extracted language, company name, contact person, address and job title.
- The letter paragraphs are generated with one of the schemas; for both, personal user preferences stored in long-term memory are considered (like the number of paragraphs, ...).
- Then the cover letter is generated with LaTeX.
- After that the agent is interrupted and the cover letter is shown with a feedback chat below.
- If feedback is given, regeneration of the paragraphs is triggered, providing the LLM with the feedback message, the previous paragraphs and the data from the chosen paragraph-writing option.
- If either `quit` or `exit` is part of the feedback message, the feedback loop is terminated; as a last step, previous feedback messages are used to identify general applicable user preferences for the paragraphs that are not ad-specific, and these are stored as long-term memory.
- After that it is possible to edit the elements of the letter manually and regenerate it without the LLM. The letter generation is not a tool meant to fool somebody; therefore a side note is placed with an explanation that it is AI generated, to make it more ethical.

```mermaid
---
config:
  flowchart:
    curve: linear
---
graph TD;
__start__([__start__]):::first
write_cover_letter_paragraphs(write_cover_letter_paragraphs)
query_cv_data(query_cv_data)
write_cover_letter_paragraphs_with_cv(write_cover_letter_paragraphs_with_cv)
write_cover_letter_extensions(write_cover_letter_extensions)
generate_cover_letter(generate_cover_letter)
feedback_loop(feedback_loop)
__end__([__end__]):::last
__start__ --> write_cover_letter_extensions;
feedback_loop -.-> __end__;
feedback_loop -.-> write_cover_letter_paragraphs;
feedback_loop -.-> write_cover_letter_paragraphs_with_cv;
generate_cover_letter --> feedback_loop;
query_cv_data --> write_cover_letter_paragraphs_with_cv;
write_cover_letter_extensions -.-> query_cv_data;
write_cover_letter_extensions -.-> write_cover_letter_paragraphs;
write_cover_letter_paragraphs --> generate_cover_letter;
write_cover_letter_paragraphs_with_cv --> generate_cover_letter;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```
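To illustrate the RAG-like matching option described above, here is a minimal sketch using Chroma's similarity search; the collection name, example texts, queries and distance threshold are placeholders, not the project's actual code.

```python
# Sketch of the RAG-like matching idea (assumes chromadb is installed);
# texts, queries and the distance cutoff are illustrative placeholders.
import chromadb

client = chromadb.Client()
cv_items = client.create_collection("cv_items")

cv_items.add(
    ids=["exp-1", "proj-1"],
    documents=[
        "Built data pipelines in Python and deployed them with Docker.",
        "Developed a Streamlit webapp with LLM-based agents.",
    ],
)

# Queries the LLM would derive from the job ad's tasks/requirements
queries = ["experience with Docker deployments", "LLM application development"]

THRESHOLD = 1.0  # placeholder distance cutoff
matches = []
for query in queries:
    result = cv_items.query(query_texts=[query], n_results=1)
    distance = result["distances"][0][0]
    if distance < THRESHOLD:  # close enough -> pair requirement with CV item
        matches.append({"requirement": query, "cv_item": result["documents"][0][0]})

print(matches)  # these matches would feed the paragraph-writing prompt
```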
### LinkedIn CV extract agent
- `LINKEDINCOOKIE` is required for this feature.
- Provide the URL to your LinkedIn profile.
- LinkedInScraper is utilized to scrape the data from the profile, logged in with the `LINKEDINCOOKIE`. Unfortunately the package is not well maintained, so the method has some limitations; common troubles are:
  - Several professional experience items are interpreted by the package as a single one.
  - For the educational background the package does not provide the dates.
  - General mix-up of the entries in the dictionary returned by the package.
- The agent then goes through the experience items provided by LinkedInScraper; the full dictionary of an entry is passed and newly sorted with the help of the LLM. The cleaning works well: information is passed to the intended keys, with no hallucination of information that was not given and no wrong placements.
- The cleaning of the educational background entries is done similarly to the experience items, but since dates are never provided for educational items by the LinkedInScraper, semi-random placeholders are created.
- After the agent's workflow, the CV is generated based on the extracted data.

```mermaid
---
config:
  flowchart:
    curve: linear
---
graph TD;
__start__([__start__]):::first
Scrape_LinkedIn_Profile(Scrape LinkedIn Profile)
Clean_Experience_Item(Clean Experience Item)
Clean_Education_Item(Clean Education Item)
__end__([__end__]):::last
Clean_Experience_Item --> Clean_Education_Item;
Scrape_LinkedIn_Profile --> Clean_Experience_Item;
__start__ --> Scrape_LinkedIn_Profile;
Clean_Education_Item --> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```

### Job search Agent
- In the chat the LLM can be asked to search for jobs.
- It's necessary to provide some kind of job title that the LLM can use for a query; it is also mandatory to specify a city where to search (optionally, a search area radius can be mentioned).
- The database is from the German **Agentur für Arbeit**; in general only jobs in Germany are listed, and the job information, titles and so on are in German, except for jobs where the postings are also in English. Often jobs like "Data Scientist", "AI Engineer", ... are not translated into German.
- The agent first validates the user message.
- After success, a tool for retrieving the job postings should be called by the LLM. If the application is run via `uv`, a Docker MCP server is started alongside it. The job search tool is provided by that MCP server, which calls the corresponding API. If the app is already running in the Docker container, the tool function is used in that container.
- An additional tool to scrape the personal job recommendations directly from LinkedIn is currently under development.
- The results are processed.
- After the agent is done, job listings are visualised as a table in the webapp. Unfortunately the database rarely provides the URL to the original job post, and detailed descriptions are not provided either.
- Therefore, there is currently no feature to directly start the cover letter agent from an item of the job listing table. But with the company and job title it should be easy to find the original job posting and use that for cover letter generation.
```mermaid
---
config:
  flowchart:
    curve: linear
---
graph TD;
__start__([__start__]):::first
input_validation(input validation)
Job_search_assistant(Job search assistant)
tools(tools)
process_results(process results)
__end__([__end__]):::last
Job_search_assistant -.-> __end__;
Job_search_assistant -.-> tools;
__start__ --> input_validation;
input_validation -.-> Job_search_assistant;
input_validation -.-> __end__;
tools --> process_results;
process_results --> __end__;
classDef default fill:#f2f0ff,line-height:1.2
classDef first fill-opacity:0
classDef last fill:#bfb6fc
```

# AI Engineering Capstone Case 2: AI Agent for Task Automation

### Objective:
Develop an AI agent capable of automating complex workflows such as data analysis, report generation, or customer support. Several complex agent workflows are developed; see [Description of the Use Case](#Description-of-the-Use-Case).

### Key Tools:
LangChain, Python/JavaScript, OpenAI Function Calling, external APIs. The key tools used are LangChain, Python, and OpenAI function calling with a self-developed MCP server that calls an external API.

### Steps:
1. Agent Design:
   - The agents' purposes and capabilities are clearly defined in [Description of the Use Case](#Description-of-the-Use-Case).
2. Tool Integration:
   - The agents are equipped with API calling, web scraping and database queries. See [Technical description](#Technical-description).
3. Agent Execution:
   - One of the developed agents has a long-term memory for user preferences.
   - Several of the developed agents consist of consecutive steps with LLM calls to improve the results.
4. Interactive Prototyping:
   - There is an extensive and complex Streamlit webapp with many different visualization features and embedded documentation.
5. Evaluation:
   - The agents are extensively tested and, based on the evaluation, several enhancements have been implemented.
   - Cover letters from the agent have already been used for applications, which has already led to an interview.
6. Documentation:
   - This README is a comprehensive report detailing the functionality. Parts of this documentation are also embedded in the corresponding features of the webapp.
   - Ethical considerations include that AI-generated content should be labelled; a side-mark label is always placed on the generated PDF. Also, all personal data handled by the AI is publicly available.

# Future features

## Job ad search agent
+ Job recommendations from LinkedIn; this depends on the linkedinscraper package, and currently this feature of the package is not working.
+ Other job databases to get data from, then maybe a direct option to generate application documents in one click, if the available data is sufficient.

## CV change agent
+ Agent to reduce and rewrite the content of the CV to highlight content that aligns with the job ad and reduce content that does not.
+ Also, a summary at the top of the CV could be generated.
+ Translation of the CV.

## CV upload agent
+ Agent that takes a CV as PDF and deconstructs the information into the CvData class, so manually adding all the information at the beginning is not necessary.

**Diverse**
* Filtering and validating all manual entries.
* Alternative LinkedIn scraping, using the login from the actual scraper and passing the full page content to a filter agent.
* Docker multi-container system. Separation of the features to achieve more resilience (restarting parts) and better scalability for deployment.
* Proper login and authentication.
* Some fixing of minor bugs and a list of open issues in the corresponding private repository.
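As a rough illustration of how agent graphs like those diagrammed above can be assembled, here is a minimal LangGraph sketch; the state fields and node functions are simplified placeholders rather than the project's actual nodes.

```python
# Minimal LangGraph sketch (assumes langgraph is installed); the nodes below
# are placeholders standing in for the real LLM-backed extraction steps.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class JobAdState(TypedDict, total=False):
    raw_content: str
    is_job_ad: bool
    job_title: str

def validate_job_ad_content(state: JobAdState) -> JobAdState:
    # Placeholder: the real node asks an LLM to judge the content
    return {"is_job_ad": "we are hiring" in state["raw_content"].lower()}

def extract_job_title_from_content(state: JobAdState) -> JobAdState:
    # Placeholder: the real node extracts the title with an LLM
    return {"job_title": "Data Scientist"}

builder = StateGraph(JobAdState)
builder.add_node("validate_job_ad_content", validate_job_ad_content)
builder.add_node("extract_job_title_from_content", extract_job_title_from_content)
builder.add_edge(START, "validate_job_ad_content")
builder.add_conditional_edges(
    "validate_job_ad_content",
    lambda s: "extract_job_title_from_content" if s.get("is_job_ad") else END,
)
builder.add_edge("extract_job_title_from_content", END)

graph = builder.compile()
print(graph.invoke({"raw_content": "We are hiring a Data Scientist in Berlin."}))
```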
A semi-automated system that transforms how I manage, analyse and plan social media content. Built in Google Sheets with Apps Script, it replaces manual reporting with intelligent automation for LinkedIn and Instagram - pulling data, generating insights, and sparking new content ideas.
🧠 Project Goal
This capstone project brings automation, structure and intelligence to the way I manage, analyse and plan social media content. Previously done manually, this system now provides a semi-automated solution for collecting data, analysing performance, and generating new content ideas across LinkedIn and Instagram. It reduces friction in my workflow and enables faster insights, clearer reports, and more strategic content decisions.
🎯 Problem It Solves
Before this project, reporting was entirely manual:
Each post’s data had to be collected manually from the apps.
Monthly reports were assembled post by post, with no structure to pull historical captions.
Content planning was done in basic spreadsheet cells, with no dynamic overview and no connection to performance.
The process was time-consuming, error-prone and difficult to scale or learn from over time.
🔧 Solution Overview
This project uses Google Sheets + Apps Script to automate and streamline:
Monthly reporting for LinkedIn and Instagram
Post-level performance tracking
Content idea generation (from newsletters, news headlines and prompts)
Content planning calendar with status tracking
Dynamic dashboards and content performance ranking
All scripts and formulas are integrated in a way that feeds multiple views, from raw data to dashboards, without redundant manual work.
💻 Tech Stack
Google Sheets (structured for automation)
Google Apps Script (JavaScript-based scripting for automation and content generation)
Gmail API (used in Apps Script to pull newsletter data)
NewsAPI.org (for headline-based content inspiration)
Basic spreadsheet formulas & conditional formatting
No-code/low-code AI integration (for ideation and planning)
🔍 Key Features
LinkedIn Monthly Overview: Auto-populates high-level data, feeding a long-established LinkedIn Performance tab and LinkedIn Dashboard.
LinkedIn Post Weekly: Post-level data feeds into the main views, tracking trends and engagement.
Instagram Reporting: Structured the same way as LinkedIn - monthly and post-level, mostly automated.
🗓 Weekly Ideas Generator:
Pulls content ideas from two sources:
News headlines via NewsAPI
Relevant newsletters via Gmail search
Categorises into:
News-based
Newsletter
Wild card prompts
Assigns status, pillar and platform
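The Weekly Ideas Generator above is implemented in Google Apps Script; purely to illustrate the NewsAPI side of that flow, here is the equivalent request sketched in Python. The query term, page size, and the `NEWS_API_KEY` environment variable are placeholders, not the project's actual code.

```python
# Illustration only: the project makes this call from Apps Script.
# Assumes the requests package and a NewsAPI key in the NEWS_API_KEY env var.
import os
import requests

def fetch_headline_ideas(query: str, limit: int = 5) -> list[str]:
    resp = requests.get(
        "https://newsapi.org/v2/everything",
        params={"q": query, "pageSize": limit, "sortBy": "publishedAt",
                "apiKey": os.environ["NEWS_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    # Each article title becomes a candidate "News-based" content idea
    return [article["title"] for article in resp.json()["articles"]]

print(fetch_headline_ideas("social media marketing"))
```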
📅 Content Planning Sheet:
Dropdowns for status, pillar, platform
Formula-based highlight for top 20% performers
Pulls from the calendar for a granular overview
📁 Idea Generator (Separate Doc):
Dropdowns for pillar and platform
Suggests hooks, captions, content types and hashtags
Library built from months of idea curation
📈 Outcome
The current setup drastically reduces manual work and improves consistency in tracking and planning. It enables:
Weekly and monthly insights with minimal effort
Data-driven decisions for content creation
An expandable system that I can adapt to new platforms
I also learned:
How to write and debug Google Apps Scripts
How to structure Sheets for multi-tab dependencies
How AI can enhance ideation without replacing strategic judgement
✅ Testing and Ongoing Use
This system has been actively used and tested in my own workflow over the past months. I have iteratively improved it based on real needs and feedback from using it daily. It continues to evolve - I consider it a living tool rather than a finished product. As my content needs and platforms change, I plan to keep upgrading it.
Healthmate is a revolutionary AI-powered medical assistant platform that transforms healthcare
information access and patient care coordination. Built with cutting-edge technologies including
LangGraph AI agents, document intelligence (RAG), and integrated scheduling, Healthmate addresses
critical gaps in medical information retrieval, patient-doctor communication, and healthcare workflow
management.
The Bilingual Content Creator Agent is a LangChain-based application designed to generate and revise professional content in both English and Lithuanian. It supports multiple content formats such as:
✍️ Blog posts
📢 Social media updates
🎬 Video scripts
📧 Newsletters
StockTalk is your AI-powered investing companion. It replicates the common steps investors take - tracking top movers, spotting the most talked-about stocks, checking sentiment, and analyzing trends - and brings them all into one interactive app. Just ask questions, dive into real-time data, and get AI-driven insights that make investing more engaging.
Features:
📈 Market Overview – Get insights into major indices like S&P 500, Nasdaq, and Dow Jones.
📊 Stock & CEO Analysis – Fetch company and CEO insights, including sentiment analysis.
🔥 High Volatility Stocks – View the most volatile stocks over the past week.
🔎 Real-time News Analysis – Get the latest stock-related news and sentiment insights.
💬 AI Investment Chatbot – Ask questions about stocks, market trends, and investment strategies.
📥 Export Chat History – Save your chat conversation as JSON or PDF.
📄 Investment Calculator – Calculate future investment returns based on compound interest.
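As a concrete example of the calculator's math, here is the standard compound-interest formula such a feature relies on, sketched in Python; the monthly compounding default is an assumption, not necessarily StockTalk's exact setting.

```python
def future_value(principal: float, annual_rate: float, years: int,
                 compounds_per_year: int = 12) -> float:
    """Standard compound interest: FV = P * (1 + r/n) ** (n * t)."""
    n, r, t = compounds_per_year, annual_rate, years
    return principal * (1 + r / n) ** (n * t)

# e.g. 10,000 invested at 7% compounded monthly for 10 years
print(round(future_value(10_000, 0.07, 10), 2))  # ≈ 20,096.6
```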
Step into the museum of the future. This AI-powered virtual guide lets visitors explore exhibits by asking questions instead of reading long labels. Choose an artifact, type your question in Lithuanian or English, and get a short, clear answer.
Beyond single exhibits, the guide can compare related items, place them on a timeline, and provide historical context. Powered by Retrieval-Augmented Generation (RAG) with OpenAI and Pinecone, it blends curated museum data with the model’s knowledge for grounded, contextual answers.
Perfect for museums that want to make collections more engaging and accessible, this prototype shows how visitors could scan a QR code or browse a gallery view to unlock interactive, conversational learning.
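A minimal sketch of the retrieval step behind such a guide, assuming the openai and pinecone Python packages and an existing index of exhibit descriptions; the index name, metadata field, and model names are placeholders, not the prototype's actual configuration.

```python
# Retrieval sketch only (assumes OPENAI_API_KEY and PINECONE_API_KEY are set,
# and that an index of exhibit texts already exists); names are placeholders.
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("museum-exhibits")

def answer_visitor_question(question: str) -> str:
    # Embed the question and fetch the closest exhibit passages
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    hits = index.query(vector=embedding, top_k=3, include_metadata=True)
    context = "\n".join(match.metadata["text"] for match in hits.matches)

    # Ground the answer in the retrieved museum data
    reply = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer briefly using only the exhibit context provided."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```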
Transform your shipping documents into intelligent insights in seconds!
This AI-powered system automatically extracts, validates, and analyzes transportation documents (orders, invoices, delivery confirmations) using advanced multi-agent technology.
Simply upload your PDFs and watch as it:
Instantly extracts structured data from any logistics document
Validates routes with Google Maps integration and distance calculations
Checks pricing against your agreements for instant compliance verification
Provides analytics on customer patterns and processing performance
Learns and adapts to improve accuracy over time
Perfect for logistics companies, freight forwarders, and transportation providers who want to eliminate manual data entry, catch pricing discrepancies, and gain real-time insights into their operations.
Try it with real PDF examples and see the magic happen!
Built with cutting-edge AI agents, Google Maps integration, and a beautiful web interface - it's like having a team of logistics experts working 24/7 on your documents.
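As a rough illustration of the extraction step, here is a hedged sketch that asks an LLM for structured JSON from a document's text and validates it with pydantic; the schema fields, model choice, and sample text are assumptions, not the system's actual pipeline.

```python
# Sketch only: assumes the openai and pydantic packages and OPENAI_API_KEY.
# The schema below is an illustrative guess at useful fields.
import json
from openai import OpenAI
from pydantic import BaseModel

class TransportOrder(BaseModel):
    order_number: str
    origin: str
    destination: str
    price_eur: float

client = OpenAI()

def extract_order(document_text: str) -> TransportOrder:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract order_number, origin, destination and price_eur "
                        "from the logistics document. Reply with JSON only."},
            {"role": "user", "content": document_text},
        ],
    )
    return TransportOrder.model_validate(json.loads(response.choices[0].message.content))

order = extract_order("Order 4711: Vilnius -> Berlin, agreed price 950 EUR")
print(order.origin, order.price_eur)
```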
A comprehensive AI-powered platform that combines cutting-edge video translation technology with an intelligent story writing assistant. Transform videos across languages while maintaining perfect audio synchronization, and create compelling stories through an AI-guided 5-step creative process.
The Translation Tool That Learns From You
Stop correcting the same translation mistakes over and over. With TranslatePrompt, you teach the tool your preferences, and it remembers them forever.
How it works:
Translate: Get your initial translation.
Refine: Easily correct any term to fit your context (e.g., change "beer" to "pint").
Automate: Your correction is instantly saved to your personal glossary.
Perfect: The next time you translate, your custom terms are applied automatically.
The result is faster, smarter, and perfectly consistent translations every time.
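A minimal sketch of the glossary idea, assuming the openai package; storing corrections in a simple dict and injecting them into the translation prompt is an illustrative simplification, not TranslatePrompt's actual implementation.

```python
# Illustrative sketch (assumes OPENAI_API_KEY is set); the glossary storage
# and prompt wording are simplifications of how a personal glossary could work.
from openai import OpenAI

client = OpenAI()
glossary = {"beer": "pint"}  # corrections the user has saved earlier

def translate(text: str, target_language: str = "British English") -> str:
    rules = "\n".join(f'- translate "{src}" as "{dst}"' for src, dst in glossary.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Translate into {target_language}. "
                        f"Always apply these glossary rules:\n{rules}"},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

glossary["football"] = "footy"  # a new correction is remembered for next time
print(translate("Let's grab a beer after the football match."))
```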
CommitDigest automatically turns your GitHub commits into simple, human-readable summaries. Instead of digging through endless Git logs, you get clear AI-generated reports delivered on your schedule—via Slack, Discord, email, or webhooks. It saves developers time, keeps teams aligned, and helps everyone understand progress without manual updates.
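A rough sketch of the core loop such a tool might run, using the public GitHub commits API and an LLM summary; the repository name, time window, token handling, and delivery step are placeholders, not CommitDigest's implementation.

```python
# Sketch only: assumes the requests and openai packages, a GITHUB_TOKEN with
# read access, and OPENAI_API_KEY. Repo and time window are placeholders.
import os
import requests
from openai import OpenAI

def fetch_commit_messages(repo: str, since_iso: str) -> list[str]:
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/commits",
        params={"since": since_iso},
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["commit"]["message"] for item in resp.json()]

def summarize(messages: list[str]) -> str:
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize these git commits for a non-technical reader."},
            {"role": "user", "content": "\n".join(messages)},
        ],
    )
    return reply.choices[0].message.content

print(summarize(fetch_commit_messages("octocat/hello-world", "2024-05-01T00:00:00Z")))
```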
AspirePath Navigator is an AI-powered career analysis platform that helps professionals understand their career trajectory, automation risks, and upskilling opportunities. The platform provides personalized insights by analyzing LinkedIn profiles and resumes.