Is your approach to constructing the F/T dataset written up anywhere?
Thanks for sharing the model!
Finally, a question on this sub that is not about an “AI girlfriend” (ahem RP)
There are a dozen-plus different ways to incorporate KGs into an LLM workflow, with or without RAG. Some examples:
## Analyze the user question, map it onto KG nodes, and extract the connectivity links between them. Then put that info into the LLM prompt to better guide the answer.
Example: “Who is Mary Lee Pfeiffer’s son and what is he known for?” (BTW, try this on ChatGPT 3.5.)
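A minimal sketch of that first pattern in C#, assuming a toy in-memory triple store; the entity names, relation labels, and the naive substring entity linking are illustrative assumptions, not a recommended implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy knowledge graph as (subject, relation, object) triples.
// All entity and relation names here are illustrative only.
record Triple(string Subject, string Relation, string Object);

class KgPromptBuilder
{
    private readonly List<Triple> _triples;
    public KgPromptBuilder(List<Triple> triples) => _triples = triples;

    // Naive entity linking: seed with entities mentioned verbatim in the
    // question, then expand one hop along the graph so connected facts
    // ("mother of X" -> "X is known for ...") come along too.
    public string BuildPrompt(string question)
    {
        var entities = _triples
            .SelectMany(t => new[] { t.Subject, t.Object })
            .Distinct()
            .Where(e => question.Contains(e, StringComparison.OrdinalIgnoreCase))
            .ToHashSet();

        var facts = new List<Triple>();
        for (int hop = 0; hop < 2; hop++)   // seed pass + one expansion hop
        {
            facts = _triples
                .Where(t => entities.Contains(t.Subject) || entities.Contains(t.Object))
                .ToList();
            foreach (var t in facts) { entities.Add(t.Subject); entities.Add(t.Object); }
        }

        var lines = string.Join("\n",
            facts.Distinct().Select(t => $"- {t.Subject} {t.Relation} {t.Object}"));
        return $"Known facts:\n{lines}\n\nQuestion: {question}\nAnswer using the facts above.";
    }
}

class KgPromptDemo
{
    static void Main()
    {
        var kg = new List<Triple>
        {
            new("Mary Lee Pfeiffer", "is the mother of", "Tom Cruise"),
            new("Tom Cruise", "is known for", "the Mission: Impossible films"),
        };
        var prompt = new KgPromptBuilder(kg).BuildPrompt(
            "Who is Mary Lee Pfeiffer's son and what is he known for?");
        Console.WriteLine(prompt); // feed this into the LLM call of your choice
    }
}
```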
## Use KG for better RAG relevancy.
Example: Assume your KG is not about concepts but simply links paragraphs/chunks together. This could be as simple as mining links like “(see Paragraph X for more detail)”, doing semantic similarity between chunks, adding structural info like “(chunk is part of Chapter X, Page Y)”, or building topic- or concept-based connectivity between chunks.
Then, given a user query, find the most relevant starting chunk and apply your application’s logic for what counts as “more relevant” to figure out which other linked chunks to pull into the context. One simple hack, using node centrality or Personalized PageRank, is to pull in chunks that are only indirectly connected but have high prominence in the graph.
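Here is a rough sketch of that retrieval step, assuming the chunk graph fits in memory as adjacency lists; the chunk ids, the damping factor, and the choice to seed all teleport mass on a single chunk are assumptions made just for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ChunkGraphRetriever
{
    // chunkId -> ids of linked chunks ("see paragraph X" links, same-chapter
    // structure, topic similarity, ...). Assumes every linked id also appears
    // as a key in this dictionary.
    private readonly Dictionary<string, List<string>> _links;

    public ChunkGraphRetriever(Dictionary<string, List<string>> links) => _links = links;

    // Simplified Personalized PageRank by power iteration, with all teleport
    // mass on the single chunk that matched the query best. The top-k result
    // can include prominent chunks only indirectly connected to that seed.
    public List<string> Expand(string seedChunk, int topK = 5,
                               double damping = 0.85, int iters = 30)
    {
        var nodes = _links.Keys.ToList();
        var rank = nodes.ToDictionary(n => n, _ => 1.0 / nodes.Count);

        for (int i = 0; i < iters; i++)
        {
            var next = nodes.ToDictionary(n => n, _ => 0.0);
            foreach (var (node, outLinks) in _links)
            {
                if (outLinks.Count == 0) continue;
                var share = rank[node] / outLinks.Count;
                foreach (var target in outLinks)
                    if (next.ContainsKey(target))
                        next[target] += damping * share;
            }
            next[seedChunk] += 1.0 - damping;   // personalization: teleport only to the seed
            rank = next;
        }

        return rank.OrderByDescending(kv => kv.Value)
                   .Take(topK)
                   .Select(kv => kv.Key)
                   .ToList();
    }
}

class ChunkGraphDemo
{
    static void Main()
    {
        // Hypothetical mini corpus, just to show the call shape.
        var links = new Dictionary<string, List<string>>
        {
            ["intro"] = new() { "setup" },
            ["setup"] = new() { "intro", "api", "faq" },
            ["api"]   = new() { "faq" },
            ["faq"]   = new() { "api" },
        };
        var hits = new ChunkGraphRetriever(links).Expand(seedChunk: "setup", topK: 3);
        Console.WriteLine(string.Join(", ", hits));
    }
}
```

A fuller version would spread the teleport mass over several query-matched chunks (weighted by embedding similarity), but the shape of the computation stays the same.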
MS Semantic Kernel
You could start with either of the following:
- https://github.com/microsoft/semantic-kernel/pull/1357
Run ooba with the --api arg. Finish prototyping your code for the problem you wanted to solve, and then you could revisit the question of how to run inference natively within CLR.
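For the prototyping step, here is a minimal sketch of calling that endpoint from C# with plain HttpClient; the port, route, and JSON shape assume the OpenAI-compatible API that newer ooba builds expose with --api, so adjust them to whatever your version actually serves:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class OobaClient
{
    private static readonly HttpClient Http = new();

    // Assumed endpoint; older ooba builds used a different route/port.
    private const string Url = "http://localhost:5000/v1/completions";

    public static async Task<string> CompleteAsync(string prompt)
    {
        var payload = JsonSerializer.Serialize(new { prompt, max_tokens = 200, temperature = 0.7 });
        using var content = new StringContent(payload, Encoding.UTF8, "application/json");

        var response = await Http.PostAsync(Url, content);
        response.EnsureSuccessStatusCode();

        // Pull the first completion out of the OpenAI-style response body.
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return doc.RootElement.GetProperty("choices")[0].GetProperty("text").GetString() ?? "";
    }

    static async Task Main()
    {
        Console.WriteLine(await CompleteAsync("Explain Personalized PageRank in one sentence."));
    }
}
```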
This is somewhat OT, but may be the best answer for your situation. Take it from someone who started coding C# in 2001.
The worst mistake a dev can make is to call themselves “I’m a ___ dev.” That’s an option-limiting mental trap.
Way back, I sank all my interest in the Semantic Web into porting Jena to NJena. I almost finished the conversion but never built anything useful with it.
For your problem: dockerize Ooba, llama.cpp, etc. so they expose an API endpoint, then call that API via MS Semantic Kernel from your WPF app. Profit…
Better to spend your time learning containerisation than coping with missing options in your chosen ecosystem.
Are we talking high stakes vs creative summarization here?
As usual, “the beauty is in the eye of the beholder”.
I think part of the point of these tests is being able to solve these logical puzzles given all the richness and ambiguity of natural language. We’ve had deterministic theorem provers capable of solving these problems, expressed as a closed set, for decades.
That said, please see the capstone version of the prompt in the second update, which removes most of the ambiguity per the points you raised. It also removes the ‘singles’ aspect of tennis, which consistently trips up the weaker LLMs’ in-context reasoning, making them think it’s a solo activity (despite an explicit clarification that follows).
Thank you, bud. Mind trying the same prompt on the cheapo 3.5 model? I suspect it will hit the nail on the head with your custom instructions, given that it was hit-and-miss for me with my weaker prompting jujitsu.
The tuning for storytelling does show :) Surprised it was only a guitar and not an erhu.
This is a valid critique about the form of the riddle.
Most riddles rely on out-of-context prior knowledge as part of a deductive chain of reasoning. This one is no different from the question about how many sisters one has that folks in this community use all the time.
Try the same question with badminton instead of chess. Then the same with singles tennis (which 3.5 answers by claiming the sixth brother was playing doubles tennis :)…
I hope this thread won’t descend into deliberation on whether it is possible to play Battleship alone and how much fun that is :)
That is the promise. Of course, you still need to figure out for your app domain whether a concept-level graph, a chunk-level graph, or some in-between option like a CSKG is the right approach.
One thing I find helpful with prompt design is to spend less effort on writing instructions and to replace them with specific examples instead. This swaps word-smithing for in-context learning samples. You build up the examples iteratively: run the same prompt over more text, fix the output, and add it to the example list… until you reach your context budget for the system prompt.
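A small sketch of how that can be wired up; the Input/Output framing is just one possible format, and the rough character budget here is an arbitrary stand-in for a real token count:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// A few-shot example pair: the raw input and the output you corrected
// by hand on a previous run of the prompt.
record Example(string Input, string Output);

class FewShotPromptBuilder
{
    private readonly List<Example> _examples = new();
    private readonly int _charBudget;   // crude stand-in for a token budget

    public FewShotPromptBuilder(int charBudget = 6000) => _charBudget = charBudget;

    // Add examples iteratively as you run the prompt over more text and
    // fix the outputs you don't like.
    public void Add(string input, string output) => _examples.Add(new Example(input, output));

    public string Build(string shortInstruction)
    {
        var sb = new StringBuilder(shortInstruction).AppendLine().AppendLine();
        foreach (var ex in _examples)
        {
            var section = $"Input:\n{ex.Input}\nOutput:\n{ex.Output}\n\n";
            if (sb.Length + section.Length > _charBudget) break;   // stop at the budget
            sb.Append(section);
        }
        return sb.ToString();
    }
}

class FewShotDemo
{
    static void Main()
    {
        // Hypothetical extraction task, just to show the call shape.
        var builder = new FewShotPromptBuilder();
        builder.Add("The meeting moved to 3pm.", "{\"event\":\"meeting\",\"time\":\"15:00\"}");
        builder.Add("Lunch is cancelled.", "{\"event\":\"lunch\",\"time\":null}");
        Console.WriteLine(builder.Build("Extract the event and time as JSON."));
    }
}
```

The short instruction stays, but most of the behavior ends up being carried by the examples themselves.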