The rapid growth of smart contracts has drawn significant attention to the urgent need for robust, scalable vulnerability detection techniques, since their immutable deployment on blockchain platforms makes flaws costly and irreversible. This paper introduces structured reasoning prompts with agent-role chaining for smart contract vulnerability detection, a zero-shot approach that requires no fine-tuning and instead leverages existing model capacity through structured prompt engineering. By carefully defining agent roles and embedding explicit reasoning steps within structured prompts for large language models (LLMs), the proposed method exploits the inherent reasoning capabilities of LLMs to identify security flaws in smart contracts without extensive model retraining. Experimental results demonstrate that the system achieves competitive performance compared with existing vulnerability detection techniques, highlighting the potential of prompt engineering as an efficient and adaptable strategy for strengthening smart contract security.
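To make the idea of agent-role chaining concrete, the sketch below shows one plausible way to build a chain of structured prompts, where each role sees the contract plus placeholders for the previous roles' answers. The role names (Auditor, Attacker, Judge), their reasoning steps, and the transcript format are all illustrative assumptions, not the paper's actual prompt templates.

```python
# Illustrative sketch of agent-role chained prompting.
# ASSUMPTIONS: the role names, instructions, and transcript layout below
# are hypothetical examples, not the prompts used in the paper.

ROLES = [
    ("Auditor", "List every external call, state write, and access-control "
                "check in the contract, step by step."),
    ("Attacker", "Using the Auditor's notes, reason step by step about how "
                 "each finding could be exploited (e.g. reentrancy, integer "
                 "overflow, unchecked call return values)."),
    ("Judge", "Review the Attacker's reasoning and report only confirmed "
              "vulnerabilities, each with a severity rating."),
]

def build_chained_prompts(contract_source: str) -> list[str]:
    """Build one structured prompt per agent role, chaining them by
    carrying a running transcript of the contract and prior roles."""
    prompts = []
    transcript = f"Solidity contract under review:\n{contract_source}\n"
    for role, instructions in ROLES:
        prompt = (f"You are the {role} agent.\n"
                  f"{transcript}\n"
                  f"Task: {instructions}\n"
                  f"Think step by step before answering.")
        prompts.append(prompt)
        # In a live system, the model's reply for this role would be
        # inserted here; the placeholder marks where it would go.
        transcript += f"\n[{role} agent's answer appended here]\n"
    return prompts

prompts = build_chained_prompts("contract Wallet { /* ... */ }")
```

Each prompt in the returned list would be sent to the LLM in order, with the model's actual answer substituted for the placeholder before the next role's prompt is issued; no fine-tuning step is involved.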