I thought GPT-5 and the other models used in ChatGPT are decoder-only models, yet the diagram shows an encoder/decoder. Also, during inference, tokenization, prefill, and decoding are the three main phases; it would have been great if the blog covered those concepts. I suspect the steps described here may not be exactly what ChatGPT does when a request is sent to it, but I'd be glad to be corrected if my observation is wrong.
You're correct.
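For readers who want to see what those three phases look like in practice, here is a rough toy sketch using a small open decoder-only model via Hugging Face `transformers`. It is purely illustrative and not ChatGPT's actual serving stack; the prompt and generation length are arbitrary.

```python
# Toy illustration of the three inference phases for a decoder-only LM:
# tokenization -> prefill -> decode. Uses GPT-2 purely as a stand-in model;
# this is NOT ChatGPT's serving stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "Agent Builder lets you"

# 1) Tokenization: text -> token ids
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # 2) Prefill: one forward pass over the whole prompt, filling the KV cache
    out = model(**inputs, use_cache=True)
    past = out.past_key_values
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)

    generated = [next_token]
    # 3) Decode: autoregressive loop, one new token per step, reusing the cache
    for _ in range(20):
        out = model(input_ids=next_token, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated.append(next_token)

print(prompt + tokenizer.decode(torch.cat(generated, dim=-1)[0]))
```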
OpenAI dropped a new product called Agent Builder last week, and I couldn't wait to jump on the bandwagon and play around with it.
I tested it out by wrapping my API endpoint inside a custom MCP server, then hooking it up as a custom server in the new Agent Builder, right inside the drag-and-drop canvas.
While connecting my own custom MCP server, I ran into a few little gotchas that you’ll definitely want to know about if you’re planning to build one yourself.
https://xianli.substack.com/p/how-i-connected-a-custom-mcp-server
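For anyone who wants the gist before clicking through: the core pattern is just exposing an existing HTTP endpoint as an MCP tool. Below is a minimal sketch, assuming the official `mcp` Python SDK (FastMCP) plus `httpx`; the endpoint URL and tool name are hypothetical placeholders, and the actual Agent Builder wiring and gotchas are covered in the post above.

```python
# Minimal sketch: wrap an existing HTTP API endpoint as a single MCP tool.
# Assumes the official `mcp` Python SDK (FastMCP) and `httpx`; the URL and
# tool name below are hypothetical placeholders.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-api-wrapper")

@mcp.tool()
async def query_my_api(question: str) -> str:
    """Forward a question to the wrapped API endpoint and return its raw response."""
    async with httpx.AsyncClient(timeout=30.0) as client:
        resp = await client.post(
            "https://example.com/api/answer",  # hypothetical endpoint
            json={"query": question},
        )
        resp.raise_for_status()
        return resp.text

if __name__ == "__main__":
    # Agent Builder connects to remote MCP servers over HTTP, so expose the
    # server via the streamable HTTP transport rather than stdio.
    mcp.run(transport="streamable-http")
```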
excellent!!
thanks