We know one shortcoming of historical simulation (HS) is that the result depends heavily on the choice of sample length: the VaR estimate either barely moves for long stretches or jumps suddenly when a large observation enters or leaves the window. Despite this weakness, HS remains popular because of its obvious advantages: it is easy to implement and requires no distributional assumption, which is especially appealing when the return distribution is hard to estimate.
Several ways have been proposed to improve HS's performance; here are two methods with good results that I personally use. The first is from The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk by Jacob Boudoukh, Matthew Richardson and Robert F. Whitelaw. "Hybrid" means this approach is a combination of the parametric method and HS. The basic idea: since giving larger weights to recent data and smaller weights to older data in the exponentially weighted moving average (EWMA) volatility calculation improves the backtesting performance of the parametric method, why not apply the same principle to historical simulation? So the method estimates the VaR of a portfolio by applying exponentially declining weights to past returns and then finding the appropriate percentile of this time-weighted empirical distribution. Based on empirical results, it does improve on both vanilla historical simulation and the EWMA parametric method.
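As a minimal sketch of that weighting scheme (my own illustrative Python, not the authors' code; the function name and the decay factor are assumptions, and the paper experiments with different decay values):

```python
import numpy as np

def hybrid_var(returns, lam=0.98, alpha=0.05):
    """Hybrid (time-weighted HS) VaR sketch.

    returns: past returns, oldest first.
    lam:     exponential decay factor (illustrative choice).
    alpha:   tail probability, e.g. 0.05 for 95% VaR.
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    # Weight for a return of age a (newest has age 0):
    # w = lam**a * (1 - lam) / (1 - lam**n), so weights sum to 1.
    ages = np.arange(n - 1, -1, -1)
    weights = (1 - lam) / (1 - lam ** n) * lam ** ages
    # Sort returns ascending and accumulate their weights
    # until the tail probability alpha is reached.
    order = np.argsort(returns)
    sorted_r = returns[order]
    cum_w = np.cumsum(weights[order])
    idx = np.searchsorted(cum_w, alpha)
    return -sorted_r[idx]  # report VaR as a positive loss
```

With lam close to 1 the weights flatten out and this degenerates to plain historical simulation, which is a handy sanity check.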
The second method is from Incorporating Volatility Updating into The Historical Simulation Method for Value at Risk by John Hull and Alan White. The idea is to adjust each past return by the ratio of current volatility to the volatility prevailing when that return was observed, and then run historical simulation on the adjusted returns. Suppose today's volatility is 20% while it was, say, 30% at the time: used directly, that past return obviously exaggerates the current market situation, so it is scaled by 20/30. The empirical results show that this method even outperforms the first one.
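A sketch of the adjustment, under my own assumptions: I use a simple EWMA recursion as the volatility estimate (Hull and White work with EWMA and GARCH-type estimates), and seed the variance with the first squared return, which is an arbitrary choice:

```python
import numpy as np

def hw_adjusted_returns(returns, lam=0.94):
    """Scale each past return by sigma_today / sigma_then.

    Volatilities come from an EWMA variance recursion (my choice
    of estimator); lam is the EWMA decay factor.
    """
    returns = np.asarray(returns, dtype=float)
    var = np.empty(len(returns))
    var[0] = returns[0] ** 2  # seed for the recursion (an assumption)
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t] ** 2
    sigma = np.sqrt(var)
    # Rescale every past return to today's volatility level.
    return returns * sigma[-1] / sigma

def hw_var(returns, lam=0.94, alpha=0.05):
    """Plain HS percentile on the volatility-adjusted returns."""
    adj = hw_adjusted_returns(returns, lam)
    return -np.quantile(adj, alpha)
```

Note the direction of the scaling: if volatility has risen since a return was observed, that return gets magnified; if volatility has fallen (the 20% vs 30% example above), it gets damped.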
Normally a few lines of code are enough for this adjustment; please read my post for the empirical results.